Part A: Convolutional Neural Network (CNN)¶

Name: Shaun Kwo Rui Yu Class: DAAA/FT/2A/03 Adm No.: 2317933

The main objectives of this assignment are to:¶

Implement an image classifier using a deep learning network for color images of 224 by 224 pixels, containing 15 types of vegetables.

Convert the images into grayscale with two different input sizes:

  1. 37 by 37 pixels
  2. 131 by 131 pixels.

Research and Background¶

  • CNNs in Image Classification: Convolutional Neural Networks (CNNs) are widely used in image classification tasks due to their ability to automatically learn features from images.
  • Data Augmentation: Augmenting the dataset with variations of images can help improve model generalization and robustness.
  • Grayscale Conversion: Converting color images to grayscale reduces the dimensionality of the input data while retaining essential features for classification.

Cited from ChatGPT

Additional Information:¶

There are many ways to improve the model's performance (with accuracy being one of the main metrics), such as:

  • Improving performance with data.
  • Improving performance with algorithms.
  • Improving performance with algorithm tuning.
  • Improving performance with ensembles.

Cited from: https://machinelearningmastery.com/improve-deep-learning-performance/


EDA and Preprocessing¶

  • Exploratory Data Analysis (EDA): Analyze the dataset to understand the distribution of vegetable images, class imbalances, and data quality issues.
  • Image Preprocessing: Convert color images to grayscale and resize them to the specified input sizes (37x37 and 131x131 pixels) for model training.

Model Architecture and Training¶

  • CNN Architecture: Design CNN architectures suitable for image classification, including convolutional layers, pooling layers, activation functions (e.g., ReLU), and fully connected layers.
  • Hyperparameter Tuning: Fine-tune hyperparameters such as learning rate, batch size, and optimizer choice (e.g., Adam, SGD) for optimal model performance.
  • Data Augmentation: Apply data augmentation techniques (e.g., rotation, flipping) to generate additional training data and improve model generalization.
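As an illustrative sketch of such an architecture (the layer sizes and dropout rate here are assumptions for demonstration, not the tuned model built later), a small Sequential CNN for 37x37 grayscale inputs and 15 classes might look like:

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# Illustrative CNN for 37x37 grayscale images and 15 vegetable classes
model = Sequential([
    Input(shape=(37, 37, 1)),                # grayscale -> single channel
    Conv2D(32, (3, 3), activation='relu'),   # learn local edge/texture features
    MaxPooling2D((2, 2)),                    # downsample feature maps
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),                            # regularisation against overfitting
    Dense(15, activation='softmax')          # one probability per class
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The convolution/pooling pairs shrink the spatial dimensions while increasing the number of learned feature maps, before the dense layers map those features to class probabilities.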

Evaluation and Comparison¶

  • Model Evaluation: Evaluate the trained CNN models using validation data and metrics like accuracy, precision, recall, and F1 score.
  • Comparison of Input Sizes: Compare and discuss the classification accuracies achieved for each input size (37x37 vs. 131x131 pixels) to analyze the impact of image resolution on model performance.
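As a rough sketch of how these metrics could be computed with scikit-learn (the label arrays below are made-up placeholders, not results from this assignment):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical true and predicted class indices for a 3-class toy example
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]

accuracy = accuracy_score(y_true, y_pred)
# Macro averaging treats every class equally, which suits the
# balanced test and validation splits used in this assignment
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='macro')

print(f'accuracy={accuracy:.3f}, precision={precision:.3f}, '
      f'recall={recall:.3f}, f1={f1:.3f}')
```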

Importing Libraries


In [2]:
# Basic libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# import warnings
import warnings
# filter warnings
warnings.filterwarnings('ignore')
import PIL
import PIL.Image as Image
from pathlib import Path
import shutil
import glob


# Importing Tensorflow Libraries
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.preprocessing import image
from keras.optimizers import Adam, RMSprop
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.models import load_model
from keras_tuner.tuners import RandomSearch
from keras.regularizers import l2 

#  Importing Scikit learn libraries
from sklearn.utils.class_weight import compute_class_weight
from sklearn.metrics import confusion_matrix

Exploratory Data Analysis (EDA)


In [2]:
# Finding size of each dataset (train, test, validation)
def get_dataset_size(path):
    data_dir = Path(f'./Dataset for CA1 part A - AY2425S1/{path}')
    # Count the number of .jpg files in the dataset
    image_files = list(data_dir.glob('**/*.jpg'))
    image_count = len(image_files)

    print(f'Total number of image files in the {path} dataset: {image_count}')
    return image_count

train_size= get_dataset_size('train')
test_size= get_dataset_size('test')
validation_size= get_dataset_size('validation')

dataset_size=train_size+test_size+validation_size
print(f'Total images found in the whole dataset: {dataset_size}')
Total number of image files in the train dataset: 9043
Total number of image files in the test dataset: 3000
Total number of image files in the validation dataset: 3000
Total images found in the whole dataset: 15043
In [3]:
def plot_images(directory, title):
    fig, axs = plt.subplots(5, 3, figsize=(15, 10))
    fig.suptitle(title, fontsize=16)

    for idx, subdir in enumerate(directory.iterdir()):
        images = list(subdir.glob('*.jpg'))[:1]  # Get the first image
        for image_path in images:
            image = Image.open(str(image_path))
            ax = axs[idx // 3, idx % 3]
            ax.imshow(image)
            ax.axis('off')  # Hide axes
            ax.set_title(subdir.name)

    plt.tight_layout()
    plt.show()

# Define the directories for train, test, and validation datasets
train_dir = Path('./Dataset for CA1 part A - AY2425S1/train')
test_dir = Path('./Dataset for CA1 part A - AY2425S1/test')
validation_dir = Path('./Dataset for CA1 part A - AY2425S1/validation')

# Plot the first images in each directory
plot_images(train_dir, 'Train')
plot_images(test_dir, 'Test')
plot_images(validation_dir, 'Validation')

# cited from Chatgpt

Visualising the First Pictures of the train, test and validation datasets¶


We can already spot a problem in the Bean pictures of the train dataset:

one of the images shows a carrot instead of a bean.

This image can be removed later, during model improvement, to increase the model's accuracy.

More specific additional information on improving the model with data:¶

  • Invent More Data
  • Rescale Your Data
  • Transform Your Data (flipping and rotating them)
  • Feature Selection (Deleting Images)

Cited from: https://machinelearningmastery.com/improve-deep-learning-performance/

In [4]:
import matplotlib.pyplot as plt

# Calculate split percentages
train_percentage = (train_size / dataset_size) * 100
test_percentage = (test_size / dataset_size) * 100
validation_percentage = (validation_size / dataset_size) * 100


# Pie chart of train test split datasets
plt.figure(figsize=(6,6))
plt.pie([train_percentage, test_percentage, validation_percentage],
        labels=['Train', 'Test', 'Validation'], 
        colors=['gold', 'lightcoral', 'skyblue'], 
        autopct='%1.1f%%', startangle=90, explode=(0.1, 0.05, 0.05),
        shadow=True, wedgeprops={'edgecolor': 'black', 'linewidth': 1})
plt.suptitle('Dataset Train-Test-Validation Split Percentage', fontsize=16, fontweight='bold')
plt.title(f'Total Dataset Size: {dataset_size}', fontsize=12)
plt.axis('equal')
plt.tight_layout()
plt.show()

From the Pie Chart above, we can see that¶

The train : test : validation ratio is 60 : 20 : 20, which is a well-balanced split of the dataset for model creation.

In [5]:
def images_per_file(file):
    # Initialize a dictionary to store the count of .jpg files in each folder
    jpg_counts = {}

    # Traverse through each folder in the directory
    for root, dirs, files in os.walk(f'./Dataset for CA1 part A - AY2425S1/{file}'):
        # Count the number of .jpg files in the current folder
        jpg_files = [file for file in files if file.lower().endswith('.jpg')]
        jpg_count = len(jpg_files)
        
        # Store the count in the dictionary using the folder name as the key
        folder_name = os.path.basename(root)
        jpg_counts[folder_name] = jpg_count
    
    # Create variables for each folder count
    for folder, count in jpg_counts.items():
        globals()[f"{file}_{folder}"] = count  # Create variable with name {file}_{folder}
    
    # Print the count of .jpg files in each folder
    for folder, count in jpg_counts.items():
        print(f" {file}_{folder} has {count} .jpg file(s).")
    print('\n\n')

# Call the function for 'train', 'test', and 'validation' folders
images_per_file('train')
images_per_file('test')
images_per_file('validation')
 train_train has 0 .jpg file(s).
 train_Bean has 795 .jpg file(s).
 train_Bitter_Gourd has 720 .jpg file(s).
 train_Bottle_Gourd has 441 .jpg file(s).
 train_Brinjal has 868 .jpg file(s).
 train_Broccoli has 750 .jpg file(s).
 train_Cabbage has 503 .jpg file(s).
 train_Capsicum has 351 .jpg file(s).
 train_Carrot has 256 .jpg file(s).
 train_Cauliflower has 587 .jpg file(s).
 train_Cucumber has 812 .jpg file(s).
 train_Papaya has 566 .jpg file(s).
 train_Potato has 377 .jpg file(s).
 train_Pumpkin has 814 .jpg file(s).
 train_Radish has 248 .jpg file(s).
 train_Tomato has 955 .jpg file(s).



 test_test has 0 .jpg file(s).
 test_Bean has 200 .jpg file(s).
 test_Bitter_Gourd has 200 .jpg file(s).
 test_Bottle_Gourd has 200 .jpg file(s).
 test_Brinjal has 200 .jpg file(s).
 test_Broccoli has 200 .jpg file(s).
 test_Cabbage has 200 .jpg file(s).
 test_Capsicum has 200 .jpg file(s).
 test_Carrot has 200 .jpg file(s).
 test_Cauliflower has 200 .jpg file(s).
 test_Cucumber has 200 .jpg file(s).
 test_Papaya has 200 .jpg file(s).
 test_Potato has 200 .jpg file(s).
 test_Pumpkin has 200 .jpg file(s).
 test_Radish has 200 .jpg file(s).
 test_Tomato has 200 .jpg file(s).



 validation_validation has 0 .jpg file(s).
 validation_Bean has 200 .jpg file(s).
 validation_Bitter_Gourd has 200 .jpg file(s).
 validation_Bottle_Gourd has 200 .jpg file(s).
 validation_Brinjal has 200 .jpg file(s).
 validation_Broccoli has 200 .jpg file(s).
 validation_Cabbage has 200 .jpg file(s).
 validation_Capsicum has 200 .jpg file(s).
 validation_Carrot has 200 .jpg file(s).
 validation_Cauliflower has 200 .jpg file(s).
 validation_Cucumber has 200 .jpg file(s).
 validation_Papaya has 200 .jpg file(s).
 validation_Potato has 200 .jpg file(s).
 validation_Pumpkin has 200 .jpg file(s).
 validation_Radish has 200 .jpg file(s).
 validation_Tomato has 200 .jpg file(s).



In [6]:
# Calculate percentages for each vegetable based on counts and dataset size
train_Bean_percentage = (train_Bean / train_size) * 100
train_Bitter_Gourd_percentage = (train_Bitter_Gourd / train_size) * 100
train_Bottle_Gourd_percentage = (train_Bottle_Gourd / train_size) * 100
train_Brinjal_percentage = (train_Brinjal / train_size) * 100
train_Broccoli_percentage = (train_Broccoli / train_size) * 100
train_Cabbage_percentage = (train_Cabbage / train_size) * 100
train_Capsicum_percentage = (train_Capsicum / train_size) * 100
train_Carrot_percentage = (train_Carrot / train_size) * 100
train_Cauliflower_percentage = (train_Cauliflower / train_size) * 100
train_Cucumber_percentage = (train_Cucumber / train_size) * 100
train_Papaya_percentage = (train_Papaya / train_size) * 100
train_Potato_percentage = (train_Potato / train_size) * 100
train_Pumpkin_percentage = (train_Pumpkin / train_size) * 100
train_Radish_percentage = (train_Radish / train_size) * 100
train_Tomato_percentage = (train_Tomato / train_size) * 100
In [7]:
# Calculate percentages for each vegetable based on counts and dataset size
test_Bean_percentage = (test_Bean / test_size) * 100
test_Bitter_Gourd_percentage = (test_Bitter_Gourd / test_size) * 100
test_Bottle_Gourd_percentage = (test_Bottle_Gourd / test_size) * 100
test_Brinjal_percentage = (test_Brinjal / test_size) * 100
test_Broccoli_percentage = (test_Broccoli / test_size) * 100
test_Cabbage_percentage = (test_Cabbage / test_size) * 100
test_Capsicum_percentage = (test_Capsicum / test_size) * 100
test_Carrot_percentage = (test_Carrot / test_size) * 100
test_Cauliflower_percentage = (test_Cauliflower / test_size) * 100
test_Cucumber_percentage = (test_Cucumber / test_size) * 100
test_Papaya_percentage = (test_Papaya / test_size) * 100
test_Potato_percentage = (test_Potato / test_size) * 100
test_Pumpkin_percentage = (test_Pumpkin / test_size) * 100
test_Radish_percentage = (test_Radish / test_size) * 100
test_Tomato_percentage = (test_Tomato / test_size) * 100
In [8]:
# Calculate percentages for each vegetable based on counts and dataset size
validation_Bean_percentage = (validation_Bean / validation_size) * 100
validation_Bitter_Gourd_percentage = (validation_Bitter_Gourd / validation_size) * 100
validation_Bottle_Gourd_percentage = (validation_Bottle_Gourd / validation_size) * 100
validation_Brinjal_percentage = (validation_Brinjal / validation_size) * 100
validation_Broccoli_percentage = (validation_Broccoli / validation_size) * 100
validation_Cabbage_percentage = (validation_Cabbage / validation_size) * 100
validation_Capsicum_percentage = (validation_Capsicum / validation_size) * 100
validation_Carrot_percentage = (validation_Carrot / validation_size) * 100
validation_Cauliflower_percentage = (validation_Cauliflower / validation_size) * 100
validation_Cucumber_percentage = (validation_Cucumber / validation_size) * 100
validation_Papaya_percentage = (validation_Papaya / validation_size) * 100
validation_Potato_percentage = (validation_Potato / validation_size) * 100
validation_Pumpkin_percentage = (validation_Pumpkin / validation_size) * 100
validation_Radish_percentage = (validation_Radish / validation_size) * 100
validation_Tomato_percentage = (validation_Tomato / validation_size) * 100
In [9]:
# Create subplots for the pie charts
fig, axs = plt.subplots(1, 3, figsize=(15, 5))

# Define labels and values for the vegetable pie charts
vegetable_labels = ['Bean', 'Bitter Gourd', 'Bottle Gourd', 'Brinjal', 'Broccoli', 'Cabbage', 'Capsicum', 'Carrot',
                    'Cauliflower', 'Cucumber', 'Papaya', 'Potato', 'Pumpkin', 'Radish', 'Tomato']

train_vegetable_sizes = [train_Bean_percentage, train_Bitter_Gourd_percentage, train_Bottle_Gourd_percentage,
                          train_Brinjal_percentage, train_Broccoli_percentage, train_Cabbage_percentage,
                          train_Capsicum_percentage, train_Carrot_percentage, train_Cauliflower_percentage,
                          train_Cucumber_percentage, train_Papaya_percentage, train_Potato_percentage,
                          train_Pumpkin_percentage, train_Radish_percentage, train_Tomato_percentage]

test_vegetable_sizes = [test_Bean_percentage, test_Bitter_Gourd_percentage, test_Bottle_Gourd_percentage,
                         test_Brinjal_percentage, test_Broccoli_percentage, test_Cabbage_percentage,
                         test_Capsicum_percentage, test_Carrot_percentage, test_Cauliflower_percentage,
                         test_Cucumber_percentage, test_Papaya_percentage, test_Potato_percentage,
                         test_Pumpkin_percentage, test_Radish_percentage, test_Tomato_percentage]

validation_vegetable_sizes = [validation_Bean_percentage, validation_Bitter_Gourd_percentage,
                              validation_Bottle_Gourd_percentage, validation_Brinjal_percentage,
                              validation_Broccoli_percentage, validation_Cabbage_percentage,
                              validation_Capsicum_percentage, validation_Carrot_percentage,
                              validation_Cauliflower_percentage, validation_Cucumber_percentage,
                              validation_Papaya_percentage, validation_Potato_percentage,
                              validation_Pumpkin_percentage, validation_Radish_percentage,
                              validation_Tomato_percentage]

# Plot the vegetable pie charts for train, test, and validation datasets
axs[0].pie(train_vegetable_sizes, labels=vegetable_labels, autopct='%1.0f%%', startangle=90, shadow=True)
axs[0].set_title(f'Train Dataset Vegetable Percentage\nSize: {train_size}', fontsize=12)

axs[1].pie(test_vegetable_sizes, labels=vegetable_labels, autopct='%1.0f%%', startangle=90, shadow=True)
axs[1].set_title(f'Test Dataset Vegetable Percentage\nSize: {test_size}', fontsize=12)

axs[2].pie(validation_vegetable_sizes, labels=vegetable_labels, autopct='%1.0f%%', startangle=90, shadow=True)
axs[2].set_title(f'Validation Dataset Vegetable Percentage\nSize: {validation_size}', fontsize=12)

# Add a common suptitle
plt.suptitle('Dataset and Vegetable Percentage', fontsize=16, fontweight='bold')

# Adjust layout and display the plot
plt.tight_layout()
plt.show()

Feature engineering and data augmentation¶


Feature engineering is the process of creating new features, modifying existing features, or removing duplicated features in a dataset to improve the performance of machine learning models.¶

  • Duplicated images (in this case, files whose names contain '- Copy') should be removed because:

    1. Redundancy Reduction: removing duplicate data points reduces redundancy in the dataset. Redundant data provides no additional information to the model but increases computational overhead during training.
    2. Data Quality Enhancement: feature engineering aims to improve the quality of the input data for machine learning models. Removing duplicates contributes to data cleaning, an essential aspect of data quality.
    3. Model Performance: ultimately, the goal of feature engineering, including duplicate removal, is to enhance the model's performance. Cleaner, non-redundant data improves the model's ability to learn meaningful patterns and make accurate predictions on new data.

Data augmentation enhances training data diversity for machine learning models¶

by creating additional images from pre-existing ones¶

Examples are:

  • Grayscaling: creating new grayscale versions of the original photos,

which use the Red, Green, Blue (RGB) color scheme.

Grayscaling uses the formulas below:

$$ \text{Average Method: } \text{gray} = \frac{R + G + B}{3} $$

$$ \text{Luminosity Method: } \text{gray} = 0.21R + 0.72G + 0.07B $$

$$ \text{Lightness Method: } \text{gray} = \frac{\max(R, G, B) + \min(R, G, B)}{2} $$

  • Flipping and rotating images is another form of data augmentation, as it creates new images from existing ones
  • Resizing the pixel dimensions of images
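To make the grayscale formulas concrete, here is a small NumPy sketch applying all three methods to a toy RGB array (PIL's `convert('L')`, used later in this notebook, applies a similar luminosity-style weighting):

```python
import numpy as np

# A tiny 2x2 "image" with RGB channels in [0, 255]
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)

# Luminosity method: gray = 0.21 R + 0.72 G + 0.07 B
weights = np.array([0.21, 0.72, 0.07])
gray_lum = rgb @ weights

# Average method: gray = (R + G + B) / 3
gray_avg = rgb.mean(axis=-1)

# Lightness method: gray = (max(R, G, B) + min(R, G, B)) / 2
gray_light = (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2

print(gray_lum.round(2))
```

Note how the luminosity method weights green most heavily, reflecting the eye's sensitivity, while the average method treats all channels equally.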

Feature Engineering¶


Removing duplicated images from the dataset and storing them in another folder called duplicates

In [10]:
import os

# Define the directory paths for train, test, and validation sets
train_dir = './Cleaned Dataset for CA1 part A - AY2425S1/train'
test_dir = './Cleaned Dataset for CA1 part A - AY2425S1/test'
validation_dir = './Cleaned Dataset for CA1 part A - AY2425S1/validation'

In the train set's Bean folder there are carrot images, so I will remove them from the Bean folder,

as carrots and beans may share visual characteristics such as color or shape.

Keeping carrot images in the Bean folder could confuse the model, leading to misclassifications or reduced accuracy in bean recognition tasks.

Before removing Carrot images from the Bean folder in train¶

Remove selected photos¶

After removing Carrot images from the Bean folder in train¶

Data Augmentation¶


In [11]:
def process_images(base_directory, output_directory, directories, resize_dimensions=(37,37)):
    def count_jpg_files_in_directory(directory):
        # Join the directory path with the file pattern
        file_pattern = os.path.join(directory, '*.jpg')
        
        # Use glob to find files matching the pattern
        jpg_files = glob.glob(file_pattern)
        
        # Count the number of JPG files found
        return len(jpg_files)

    def convert_and_resize_image(image_path, output_path, dimensions):
        # Open the image using PIL
        img = Image.open(image_path)
        
        # Convert the image to grayscale
        img_gray = img.convert('L')
        
        # Resize the image to the specified dimensions
        img_resized = img_gray.resize(dimensions)
        
        # Save the resized image
        img_resized.save(output_path)
    
    os.makedirs(output_directory, exist_ok=True)
    
    for directory in directories:
        directory_path = os.path.join(base_directory, directory)
        num_jpg_files = count_jpg_files_in_directory(directory_path)
        print(f"Processing {num_jpg_files} .jpg files in {directory}")
        
        output_vegetable_directory = os.path.join(output_directory, directory)
        os.makedirs(output_vegetable_directory, exist_ok=True)
        
        for jpg_file in glob.glob(os.path.join(directory_path, '*.jpg')):
            filename = os.path.splitext(os.path.basename(jpg_file))[0]
            output_path = os.path.join(output_vegetable_directory, f"{filename}.jpg")
            
            convert_and_resize_image(jpg_file, output_path, resize_dimensions)
    
    print(f"All images processed and saved in {output_directory} directory.\n\n")

directories = ['Bean', 'Bitter_Gourd', 'Bottle_Gourd', 'Brinjal', 'Broccoli',
                'Cabbage', 'Capsicum', 'Carrot', 'Cauliflower', 'Cucumber',
                  'Papaya', 'Potato', 'Pumpkin', 'Radish', 'Tomato']


base_directory_train = './Cleaned Dataset for CA1 part A - AY2425S1/train'
directory_train='./Cleaned Dataset for CA1 part A - AY2425S1/train37'
process_images(base_directory_train, directory_train, directories)

base_directory_test = './Cleaned Dataset for CA1 part A - AY2425S1/test'
directory_test='./Cleaned Dataset for CA1 part A - AY2425S1/test37'
process_images(base_directory_test, directory_test, directories)

base_directory_validation = './Cleaned Dataset for CA1 part A - AY2425S1/validation'
directory_validation='./Cleaned Dataset for CA1 part A - AY2425S1/validation37'
process_images(base_directory_validation, directory_validation, directories)
Processing 780 .jpg files in Bean
Processing 720 .jpg files in Bitter_Gourd
Processing 441 .jpg files in Bottle_Gourd
Processing 868 .jpg files in Brinjal
Processing 750 .jpg files in Broccoli
Processing 503 .jpg files in Cabbage
Processing 351 .jpg files in Capsicum
Processing 256 .jpg files in Carrot
Processing 587 .jpg files in Cauliflower
Processing 812 .jpg files in Cucumber
Processing 566 .jpg files in Papaya
Processing 377 .jpg files in Potato
Processing 814 .jpg files in Pumpkin
Processing 248 .jpg files in Radish
Processing 955 .jpg files in Tomato
All images processed and saved in ./Cleaned Dataset for CA1 part A - AY2425S1/train37 directory.


Processing 200 .jpg files in Bean
Processing 200 .jpg files in Bitter_Gourd
Processing 200 .jpg files in Bottle_Gourd
Processing 200 .jpg files in Brinjal
Processing 200 .jpg files in Broccoli
Processing 200 .jpg files in Cabbage
Processing 200 .jpg files in Capsicum
Processing 200 .jpg files in Carrot
Processing 200 .jpg files in Cauliflower
Processing 200 .jpg files in Cucumber
Processing 200 .jpg files in Papaya
Processing 200 .jpg files in Potato
Processing 200 .jpg files in Pumpkin
Processing 200 .jpg files in Radish
Processing 200 .jpg files in Tomato
All images processed and saved in ./Cleaned Dataset for CA1 part A - AY2425S1/test37 directory.


Processing 200 .jpg files in Bean
Processing 200 .jpg files in Bitter_Gourd
Processing 200 .jpg files in Bottle_Gourd
Processing 200 .jpg files in Brinjal
Processing 200 .jpg files in Broccoli
Processing 200 .jpg files in Cabbage
Processing 200 .jpg files in Capsicum
Processing 200 .jpg files in Carrot
Processing 200 .jpg files in Cauliflower
Processing 200 .jpg files in Cucumber
Processing 200 .jpg files in Papaya
Processing 200 .jpg files in Potato
Processing 200 .jpg files in Pumpkin
Processing 200 .jpg files in Radish
Processing 200 .jpg files in Tomato
All images processed and saved in ./Cleaned Dataset for CA1 part A - AY2425S1/validation37 directory.


In [12]:
# Reuse the process_images function defined above, this time resizing to 131x131 pixels
base_directory_train = './Cleaned Dataset for CA1 part A - AY2425S1/train'
directory_train = './Cleaned Dataset for CA1 part A - AY2425S1/train131'
process_images(base_directory_train, directory_train, directories, resize_dimensions=(131, 131))

base_directory_test = './Cleaned Dataset for CA1 part A - AY2425S1/test'
directory_test = './Cleaned Dataset for CA1 part A - AY2425S1/test131'
process_images(base_directory_test, directory_test, directories, resize_dimensions=(131, 131))

base_directory_validation = './Cleaned Dataset for CA1 part A - AY2425S1/validation'
directory_validation = './Cleaned Dataset for CA1 part A - AY2425S1/validation131'
process_images(base_directory_validation, directory_validation, directories, resize_dimensions=(131, 131))
Processing 780 .jpg files in Bean
Processing 720 .jpg files in Bitter_Gourd
Processing 441 .jpg files in Bottle_Gourd
Processing 868 .jpg files in Brinjal
Processing 750 .jpg files in Broccoli
Processing 503 .jpg files in Cabbage
Processing 351 .jpg files in Capsicum
Processing 256 .jpg files in Carrot
Processing 587 .jpg files in Cauliflower
Processing 812 .jpg files in Cucumber
Processing 566 .jpg files in Papaya
Processing 377 .jpg files in Potato
Processing 814 .jpg files in Pumpkin
Processing 248 .jpg files in Radish
Processing 955 .jpg files in Tomato
All images processed and saved in ./Cleaned Dataset for CA1 part A - AY2425S1/train131 directory.


Processing 200 .jpg files in Bean
Processing 200 .jpg files in Bitter_Gourd
Processing 200 .jpg files in Bottle_Gourd
Processing 200 .jpg files in Brinjal
Processing 200 .jpg files in Broccoli
Processing 200 .jpg files in Cabbage
Processing 200 .jpg files in Capsicum
Processing 200 .jpg files in Carrot
Processing 200 .jpg files in Cauliflower
Processing 200 .jpg files in Cucumber
Processing 200 .jpg files in Papaya
Processing 200 .jpg files in Potato
Processing 200 .jpg files in Pumpkin
Processing 200 .jpg files in Radish
Processing 200 .jpg files in Tomato
All images processed and saved in ./Cleaned Dataset for CA1 part A - AY2425S1/test131 directory.


Processing 200 .jpg files in Bean
Processing 200 .jpg files in Bitter_Gourd
Processing 200 .jpg files in Bottle_Gourd
Processing 200 .jpg files in Brinjal
Processing 200 .jpg files in Broccoli
Processing 200 .jpg files in Cabbage
Processing 200 .jpg files in Capsicum
Processing 200 .jpg files in Carrot
Processing 200 .jpg files in Cauliflower
Processing 200 .jpg files in Cucumber
Processing 200 .jpg files in Papaya
Processing 200 .jpg files in Potato
Processing 200 .jpg files in Pumpkin
Processing 200 .jpg files in Radish
Processing 200 .jpg files in Tomato
All images processed and saved in ./Cleaned Dataset for CA1 part A - AY2425S1/validation131 directory.


In [13]:
# Define the resized and grayscale directories for train, test, and validation datasets
base_directory = Path('./Cleaned Dataset for CA1 part A - AY2425S1')

resized_grayscale_train_37_dir = base_directory / 'train37'
resized_grayscale_test_37_dir = base_directory / 'test37'
resized_grayscale_validation_37_dir = base_directory / 'validation37'

resized_grayscale_train_131_dir = base_directory / 'train131'
resized_grayscale_test_131_dir = base_directory / 'test131'
resized_grayscale_validation_131_dir = base_directory / 'validation131'

vegetable_types = [
    'Bean', 'Bitter_Gourd', 'Bottle_Gourd', 'Brinjal', 'Broccoli', 'Cabbage',
    'Capsicum', 'Carrot', 'Cauliflower', 'Cucumber', 'Papaya', 'Potato', 
    'Pumpkin', 'Radish', 'Tomato'
]

def plot_vegetable_images(directory, title, image_size):
    fig, axs = plt.subplots(5, 3, figsize=(15, 15))
    fig.suptitle(f'{title} Images ({image_size[0]}x{image_size[1]})', fontsize=16)

    for idx, veg_type in enumerate(vegetable_types):
        image_path = next(directory.glob(f'{veg_type}/*.jpg'), None)  # Get the first image for each vegetable type
        if image_path:
            image = Image.open(str(image_path))
            ax = axs[idx // 3, idx % 3]
            ax.imshow(image, cmap='gray')
            ax.axis('off')  # Hide axes
            ax.set_title(veg_type)

    plt.tight_layout(rect=[0, 0.03, 1, 0.95])
    plt.show()

# Plot images for each vegetable type in train, test, and validation sets for both image sizes
plot_vegetable_images(resized_grayscale_train_37_dir, 'Train', (37, 37))
plot_vegetable_images(resized_grayscale_test_37_dir, 'Test', (37, 37))
plot_vegetable_images(resized_grayscale_validation_37_dir, 'Validation', (37, 37))

plot_vegetable_images(resized_grayscale_train_131_dir, 'Train', (131, 131))
plot_vegetable_images(resized_grayscale_test_131_dir, 'Test', (131, 131))
plot_vegetable_images(resized_grayscale_validation_131_dir, 'Validation', (131, 131))

Raw Data for Baseline Model¶


Cited from: https://www.kaggle.com/code/rejpalcz/datagenerator-for-fast-data-loading

In [4]:
# Set a fixed seed value
seed = 88
tf.random.set_seed(seed)

# Set the paths to the directories where the images are stored
train_directory = './Cleaned Dataset for CA1 part A - AY2425S1/train37'
validation_directory = './Cleaned Dataset for CA1 part A - AY2425S1/validation37'
test_directory = './Cleaned Dataset for CA1 part A - AY2425S1/test37'

# Set up the ImageDataGenerator for training data
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True
)

# Set up the ImageDataGenerator for validation and test data
datagen = ImageDataGenerator(rescale=1./255)

# Set up the generators to read images from the directories
train37 = train_datagen.flow_from_directory(
    directory=train_directory,
    target_size=(37, 37),
    color_mode='grayscale',
    batch_size=32,
    class_mode='categorical',
    shuffle=True,
    seed=seed
)

val37 = datagen.flow_from_directory(
    directory=validation_directory,
    target_size=(37, 37),
    color_mode='grayscale',
    batch_size=32,
    class_mode='categorical',
    shuffle=False,  
    seed=seed
)

test37 = datagen.flow_from_directory(
    directory=test_directory,
    target_size=(37, 37),
    color_mode='grayscale',
    batch_size=32,
    class_mode='categorical',
    shuffle=False,  
    seed=seed
)


# Set the paths to the directories where the images are stored
train_directory = './Cleaned Dataset for CA1 part A - AY2425S1/train131'
validation_directory = './Cleaned Dataset for CA1 part A - AY2425S1/validation131'
test_directory = './Cleaned Dataset for CA1 part A - AY2425S1/test131'


# Reuse train_datagen (with augmentation) for the 131x131 training images;
# validation and test data only need rescaling
datagen = ImageDataGenerator(rescale=1./255)

# Set up the generators to read images from the directories
train131 = train_datagen.flow_from_directory(
    directory=train_directory,
    target_size=(131, 131),
    color_mode='grayscale',
    batch_size=32,
    class_mode='categorical',
    shuffle=True,
    seed=seed
)

val131 = datagen.flow_from_directory(
    directory=validation_directory,
    target_size=(131, 131),
    color_mode='grayscale',
    batch_size=32,
    class_mode='categorical',
    shuffle=False,  
    seed=seed
)

test131 = datagen.flow_from_directory(
    directory=test_directory,
    target_size=(131, 131),
    color_mode='grayscale',
    batch_size=32,
    class_mode='categorical',
    shuffle=False,  
    seed=seed
)
Found 9028 images belonging to 15 classes.
Found 3000 images belonging to 15 classes.
Found 3000 images belonging to 15 classes.
Found 9028 images belonging to 15 classes.
Found 3000 images belonging to 15 classes.
Found 3000 images belonging to 15 classes.
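Since predictions are later compared against test37.classes, all three generators must share one class-to-index mapping. A minimal consistency check, sketched here with toy dictionaries standing in for the generators' real `class_indices` attributes:

```python
# Toy stand-ins for train37.class_indices / val37.class_indices / test37.class_indices,
# which map class folder names to integer indices.
train_idx = {'Bean': 0, 'Cabbage': 1, 'Tomato': 2}
val_idx = {'Bean': 0, 'Cabbage': 1, 'Tomato': 2}
test_idx = {'Bean': 0, 'Cabbage': 1, 'Tomato': 2}

# All splits must agree, otherwise predicted indices cannot be
# compared against the test generator's labels.
assert train_idx == val_idx == test_idx

# Recover the label names in index order (what the heatmap axes need).
class_labels = sorted(train_idx, key=train_idx.get)
```

flow_from_directory assigns indices by sorted folder name, so identical directory structures yield identical mappings, but the check is cheap insurance.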
In [6]:
# Function to plot confusion matrix
def plot_confusion_matrix(cm, class_names):
    plt.figure(figsize=(10, 7))
    sns.heatmap(cm, annot=True, fmt="d", cmap='Blues', xticklabels=class_names, yticklabels=class_names)
    plt.title('Confusion Matrix')
    plt.ylabel('True Label')
    plt.xlabel('Predicted Label')
    plt.show()
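When class supports differ (as in this dataset before weight balancing), a row-normalized confusion matrix is often easier to read than raw counts. A small sketch with a hypothetical 2x2 matrix:

```python
import numpy as np

# Hypothetical counts: rows are true classes, columns are predictions.
cm = np.array([[8, 2],
               [1, 9]])

# Divide each row by its total so every row sums to 1; cell (i, j) then
# reads as "fraction of class i predicted as class j".
cm_norm = cm / cm.sum(axis=1, keepdims=True)
```

Passing `fmt=".2f"` instead of `fmt="d"` to the heatmap would display these fractions.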

Weight balancing¶


The purpose of weight balancing is to give more importance (a higher weight) to minority classes (those with fewer samples) during training, so the loss is not dominated by the majority classes.
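The 'balanced' weighting used below follows a simple formula: the weight for class c is n_samples / (n_classes * count_c), the same formula sklearn's compute_class_weight('balanced', ...) applies. A toy sketch (counts are illustrative, not from this dataset):

```python
import numpy as np

# Toy per-class sample counts for a 3-class problem.
counts = np.array([50, 30, 20])
n_samples, n_classes = counts.sum(), len(counts)

# balanced weight for class c = n_samples / (n_classes * count_c)
weights = n_samples / (n_classes * counts)
# Rarer classes get larger weights:
# class 0: 100 / (3 * 50) ~ 0.667, class 2: 100 / (3 * 20) ~ 1.667
```

This is why the printed dictionaries below assign weights above 2 to the smallest classes and below 1 to the largest.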

In [5]:
train_label = []
num_batches = 100
# The augmenting generator loops forever, so break manually; sampling
# 101 batches of 32 (~3,200 images) estimates the class distribution.
for i, (img, label) in enumerate(train37):
    train_label.extend(tf.argmax(label, axis=1).numpy())
    if i == num_batches:
        break


train_label = np.array(train_label)
class_names = np.unique(train_label)

class_weights37 = compute_class_weight(class_weight='balanced', classes=class_names, y=train_label)
class_weights37 = dict(zip(class_names, class_weights37))
print(class_weights37)

# Cited from: https://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_class_weight.html
{0: 0.8584329349269588, 1: 0.8130817610062893, 2: 1.5613526570048308, 3: 0.6609406952965236, 4: 0.795079950799508, 5: 1.1970370370370371, 6: 1.766120218579235, 7: 2.292198581560284, 8: 0.9705705705705706, 9: 0.7182222222222222, 10: 1.0937394247038916, 11: 1.6447837150127227, 12: 0.7328798185941043, 13: 2.2680701754385963, 14: 0.6227360308285164}
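Iterating batches only samples the label distribution; flow_from_directory also exposes the full label array as `train37.classes`, from which exact balanced weights follow without touching the generator. A sketch with a toy array standing in for `train37.classes`:

```python
import numpy as np

# Toy stand-in for train37.classes: one integer label per training image.
classes = np.array([0, 0, 0, 1, 1, 2])

# Exact balanced weights: n_samples / (n_classes * count_c).
labels, counts = np.unique(classes, return_counts=True)
class_weights = dict(zip(labels.tolist(),
                         (len(classes) / (len(labels) * counts)).tolist()))
# e.g. the rarest class (one sample of six) gets the largest weight
```

On the real generator this is exact, deterministic, and avoids pulling ~100 augmented batches through the pipeline.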
In [6]:
train_label = []
num_batches = 100
# Same sampling approach for the 131x131 generator.
for i, (img, label) in enumerate(train131):
    train_label.extend(tf.argmax(label, axis=1).numpy())
    if i == num_batches:
        break


train_label = np.array(train_label)
class_names = np.unique(train_label)

class_weights131 = compute_class_weight(class_weight='balanced', classes=class_names, y=train_label)
class_weights131 = dict(zip(class_names, class_weights131))
print(class_weights131)
{0: 0.8584329349269588, 1: 0.8130817610062893, 2: 1.5613526570048308, 3: 0.6609406952965236, 4: 0.795079950799508, 5: 1.1970370370370371, 6: 1.766120218579235, 7: 2.292198581560284, 8: 0.9705705705705706, 9: 0.7182222222222222, 10: 1.0937394247038916, 11: 1.6447837150127227, 12: 0.7328798185941043, 13: 2.2680701754385963, 14: 0.6227360308285164}
In [7]:
num_classes = 15  # one output unit per vegetable class

Model Creation¶


A baseline CNN model serves several important purposes in machine learning and deep learning projects:¶
  1. Performance Benchmarking: Establishes a baseline to compare more complex models and assess improvements.
  2. Understanding Complexity: Helps determine if a simple or complex model is needed for the problem.
  3. Resource Efficiency: Baseline models are quicker and require fewer resources, ideal for initial exploration.
  4. Debugging and Testing: Ensures the infrastructure works correctly before using more complex models.

Cited from ChatGPT


Baseline Model¶


The baseline model consists of:

  • 1 input Conv2D layer
  • 1 additional Conv2D layer
  • 2 MaxPooling2D layers
  • 1 Flatten layer
  • 2 Dense layers
  • 1 Dropout layer
In [19]:
# 37x37 Model

# Define the CNN architecture (LAB3)
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(37, 37, 1)),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),  # Dropout layer to reduce overfitting
    Dense(num_classes, activation='softmax')  # Output layer: one unit per class (15 classes)
])

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Fit the model using the generators
history = model.fit(
    train37,
    epochs=10,
    validation_data=val37,
    class_weight=class_weights37,
    verbose=1
)

plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (37x37)')
plt.plot(history.history['val_accuracy'], label='Validation (37x37)')

plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

model.summary()

# Predict the output on the test set
predictions = model.predict(test37, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test37.classes

# Get the label to class mapping from the generator
class_labels = list(test37.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/10
283/283 [==============================] - 26s 39ms/step - loss: 2.6126 - accuracy: 0.1350 - val_loss: 2.3127 - val_accuracy: 0.2783
Epoch 2/10
283/283 [==============================] - 7s 23ms/step - loss: 2.3409 - accuracy: 0.2497 - val_loss: 2.0300 - val_accuracy: 0.3720
Epoch 3/10
283/283 [==============================] - 6s 21ms/step - loss: 2.1863 - accuracy: 0.2992 - val_loss: 2.0251 - val_accuracy: 0.3347
Epoch 4/10
283/283 [==============================] - 6s 22ms/step - loss: 2.0465 - accuracy: 0.3309 - val_loss: 1.7381 - val_accuracy: 0.4390
Epoch 5/10
283/283 [==============================] - 6s 22ms/step - loss: 1.9719 - accuracy: 0.3580 - val_loss: 1.6298 - val_accuracy: 0.4783
Epoch 6/10
283/283 [==============================] - 6s 21ms/step - loss: 1.8715 - accuracy: 0.3816 - val_loss: 1.5912 - val_accuracy: 0.4933
Epoch 7/10
283/283 [==============================] - 6s 22ms/step - loss: 1.8168 - accuracy: 0.4047 - val_loss: 1.5375 - val_accuracy: 0.5103
Epoch 8/10
283/283 [==============================] - 6s 21ms/step - loss: 1.7595 - accuracy: 0.4217 - val_loss: 1.7316 - val_accuracy: 0.4380
Epoch 9/10
283/283 [==============================] - 6s 21ms/step - loss: 1.7315 - accuracy: 0.4273 - val_loss: 1.4267 - val_accuracy: 0.5420
Epoch 10/10
283/283 [==============================] - 6s 20ms/step - loss: 1.6534 - accuracy: 0.4548 - val_loss: 1.3614 - val_accuracy: 0.5573
[Accuracy plot: train vs. validation accuracy per epoch, 37x37 baseline]
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 35, 35, 32)        320       
                                                                 
 max_pooling2d (MaxPooling2D  (None, 17, 17, 32)       0         
 )                                                               
                                                                 
 conv2d_1 (Conv2D)           (None, 15, 15, 64)        18496     
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 7, 7, 64)         0         
 2D)                                                             
                                                                 
 flatten (Flatten)           (None, 3136)              0         
                                                                 
 dense (Dense)               (None, 128)               401536    
                                                                 
 dropout (Dropout)           (None, 128)               0         
                                                                 
 dense_1 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 422,287
Trainable params: 422,287
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 1s 12ms/step
[Confusion matrix heatmap: 37x37 baseline model on the test set]
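Beyond the confusion matrix, per-class precision and recall can be read directly off `cm`; a sketch on a hypothetical 2x2 matrix (sklearn.metrics.classification_report reports the same quantities for the real predictions):

```python
import numpy as np

# Hypothetical confusion matrix: rows are true classes, columns predictions.
cm = np.array([[8, 2],
               [1, 9]])

precision = np.diag(cm) / cm.sum(axis=0)  # TP / all predicted as that class
recall = np.diag(cm) / cm.sum(axis=1)     # TP / all actually in that class
```

Per-class recall is especially informative here, since the minority classes are the ones the weight balancing is meant to protect.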
In [20]:
# 131x131 Model

# Define the CNN architecture (LAB3)
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(131, 131, 1)),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),  # Dropout layer to reduce overfitting
    Dense(num_classes, activation='softmax')  # Output layer: one unit per class (15 classes)
])

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Fit the model using the generators
history = model.fit(
    train131,
    epochs=10,
    validation_data=val131,
    class_weight=class_weights131,
    verbose=1
)

plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (131x131)')
plt.plot(history.history['val_accuracy'], label='Validation (131x131)')

plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

model.summary()

# Predict the output on the test set
predictions = model.predict(test131, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test131.classes

# Get the label to class mapping from the generator
class_labels = list(test131.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/10
283/283 [==============================] - 16s 51ms/step - loss: 2.4679 - accuracy: 0.1952 - val_loss: 1.9600 - val_accuracy: 0.3820
Epoch 2/10
283/283 [==============================] - 12s 43ms/step - loss: 1.9952 - accuracy: 0.3642 - val_loss: 1.6420 - val_accuracy: 0.4693
Epoch 3/10
283/283 [==============================] - 12s 43ms/step - loss: 1.7555 - accuracy: 0.4349 - val_loss: 1.2978 - val_accuracy: 0.5983
Epoch 4/10
283/283 [==============================] - 13s 46ms/step - loss: 1.5556 - accuracy: 0.4927 - val_loss: 1.2174 - val_accuracy: 0.6143
Epoch 5/10
283/283 [==============================] - 12s 44ms/step - loss: 1.4077 - accuracy: 0.5411 - val_loss: 1.0436 - val_accuracy: 0.6743
Epoch 6/10
283/283 [==============================] - 12s 44ms/step - loss: 1.2936 - accuracy: 0.5834 - val_loss: 0.9563 - val_accuracy: 0.7007
Epoch 7/10
283/283 [==============================] - 12s 44ms/step - loss: 1.1847 - accuracy: 0.6073 - val_loss: 0.9378 - val_accuracy: 0.7000
Epoch 8/10
283/283 [==============================] - 12s 44ms/step - loss: 1.1072 - accuracy: 0.6325 - val_loss: 0.7535 - val_accuracy: 0.7653
Epoch 9/10
283/283 [==============================] - 13s 44ms/step - loss: 1.0658 - accuracy: 0.6473 - val_loss: 0.7140 - val_accuracy: 0.7840
Epoch 10/10
283/283 [==============================] - 12s 44ms/step - loss: 1.0079 - accuracy: 0.6704 - val_loss: 0.8533 - val_accuracy: 0.7367
[Accuracy plot: train vs. validation accuracy per epoch, 131x131 baseline]
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_2 (Conv2D)           (None, 129, 129, 32)      320       
                                                                 
 max_pooling2d_2 (MaxPooling  (None, 64, 64, 32)       0         
 2D)                                                             
                                                                 
 conv2d_3 (Conv2D)           (None, 62, 62, 64)        18496     
                                                                 
 max_pooling2d_3 (MaxPooling  (None, 31, 31, 64)       0         
 2D)                                                             
                                                                 
 flatten_1 (Flatten)         (None, 61504)             0         
                                                                 
 dense_2 (Dense)             (None, 128)               7872640   
                                                                 
 dropout_1 (Dropout)         (None, 128)               0         
                                                                 
 dense_3 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 7,893,391
Trainable params: 7,893,391
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 1s 15ms/step
[Confusion matrix heatmap: 131x131 baseline model on the test set]

Model Improvement¶


The improved models consist of:

  • 1 input Conv2D layer
  • 1 additional Conv2D layer
  • 2 MaxPooling2D layers
  • 3 Dense layers
  • 2 Dropout layers
  • 1 Flatten layer

and the number of epochs is increased from 10 to 30 because:

  1. Better Generalization:

     More epochs let the model learn the training data more thoroughly, reducing underfitting; validation accuracy is monitored to catch overfitting.

  2. Weight Refinement:

     Each epoch further refines the model's weights and biases, typically lowering the training loss.

  3. Complex Pattern Learning:

     Intricate relationships in the images need more passes over the data for the model to capture.

  4. Convergence:

     Longer training lets the optimizer converge closer to a good minimum, improving performance.
In [21]:
# 37x37 Model

# fix random seed for reproducibility (LAB3)
seed = 88
np.random.seed(seed)

# create model (LAB3)
model = Sequential()

model.add(Conv2D(64, (5, 5), input_shape=(37, 37, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Dropout(0.25))
model.add(Flatten())

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model (the generator already batches at 32, so no batch_size argument is needed)
history_baseline = model.fit(train37, validation_data=val37, epochs=30, verbose=1, class_weight=class_weights37)

# Final evaluation of the model
scores = model.evaluate(test37, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))


plt.figure(figsize=(10, 6))
plt.plot(history_baseline.history['accuracy'], label='Train (37x37)')
plt.plot(history_baseline.history['val_accuracy'], label='Validation (37x37)')

plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

model.summary()

# Predict the output on the test set
predictions = model.predict(test37, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test37.classes

# Get the label to class mapping from the generator
class_labels = list(test37.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/30
283/283 [==============================] - 7s 22ms/step - loss: 2.6524 - accuracy: 0.1009 - val_loss: 2.4019 - val_accuracy: 0.2367
Epoch 2/30
283/283 [==============================] - 6s 22ms/step - loss: 2.3787 - accuracy: 0.2194 - val_loss: 2.1144 - val_accuracy: 0.3193
Epoch 3/30
283/283 [==============================] - 6s 22ms/step - loss: 2.1736 - accuracy: 0.2891 - val_loss: 1.9762 - val_accuracy: 0.3463
Epoch 4/30
283/283 [==============================] - 6s 21ms/step - loss: 1.9778 - accuracy: 0.3621 - val_loss: 1.6556 - val_accuracy: 0.4743
Epoch 5/30
283/283 [==============================] - 6s 21ms/step - loss: 1.8471 - accuracy: 0.3971 - val_loss: 1.5567 - val_accuracy: 0.4977
Epoch 6/30
283/283 [==============================] - 6s 22ms/step - loss: 1.7285 - accuracy: 0.4375 - val_loss: 1.4388 - val_accuracy: 0.5470
Epoch 7/30
283/283 [==============================] - 6s 21ms/step - loss: 1.6194 - accuracy: 0.4643 - val_loss: 1.4049 - val_accuracy: 0.5667
Epoch 8/30
283/283 [==============================] - 6s 21ms/step - loss: 1.5259 - accuracy: 0.4996 - val_loss: 1.1888 - val_accuracy: 0.6383
Epoch 9/30
283/283 [==============================] - 6s 21ms/step - loss: 1.4400 - accuracy: 0.5288 - val_loss: 1.2004 - val_accuracy: 0.6220
Epoch 10/30
283/283 [==============================] - 6s 22ms/step - loss: 1.3601 - accuracy: 0.5487 - val_loss: 1.1196 - val_accuracy: 0.6570
Epoch 11/30
283/283 [==============================] - 6s 22ms/step - loss: 1.3006 - accuracy: 0.5709 - val_loss: 1.0589 - val_accuracy: 0.6677
Epoch 12/30
283/283 [==============================] - 6s 21ms/step - loss: 1.2152 - accuracy: 0.5986 - val_loss: 0.9023 - val_accuracy: 0.7250
Epoch 13/30
283/283 [==============================] - 6s 22ms/step - loss: 1.1886 - accuracy: 0.6018 - val_loss: 0.8772 - val_accuracy: 0.7350
Epoch 14/30
283/283 [==============================] - 6s 21ms/step - loss: 1.1457 - accuracy: 0.6128 - val_loss: 0.8676 - val_accuracy: 0.7307
Epoch 15/30
283/283 [==============================] - 6s 21ms/step - loss: 1.0984 - accuracy: 0.6319 - val_loss: 0.8466 - val_accuracy: 0.7383
Epoch 16/30
283/283 [==============================] - 6s 22ms/step - loss: 1.0851 - accuracy: 0.6382 - val_loss: 0.7835 - val_accuracy: 0.7557
Epoch 17/30
283/283 [==============================] - 6s 21ms/step - loss: 1.0518 - accuracy: 0.6523 - val_loss: 0.7741 - val_accuracy: 0.7607
Epoch 18/30
283/283 [==============================] - 6s 21ms/step - loss: 1.0164 - accuracy: 0.6589 - val_loss: 0.8490 - val_accuracy: 0.7360
Epoch 19/30
283/283 [==============================] - 6s 21ms/step - loss: 0.9975 - accuracy: 0.6605 - val_loss: 0.8346 - val_accuracy: 0.7383
Epoch 20/30
283/283 [==============================] - 6s 21ms/step - loss: 0.9745 - accuracy: 0.6696 - val_loss: 0.6844 - val_accuracy: 0.7933
Epoch 21/30
283/283 [==============================] - 6s 21ms/step - loss: 0.9453 - accuracy: 0.6757 - val_loss: 0.6977 - val_accuracy: 0.7817
Epoch 22/30
283/283 [==============================] - 6s 21ms/step - loss: 0.9064 - accuracy: 0.6903 - val_loss: 0.6825 - val_accuracy: 0.7913
Epoch 23/30
283/283 [==============================] - 6s 22ms/step - loss: 0.8885 - accuracy: 0.6955 - val_loss: 0.6933 - val_accuracy: 0.7777
Epoch 24/30
283/283 [==============================] - 6s 23ms/step - loss: 0.8753 - accuracy: 0.7006 - val_loss: 0.6162 - val_accuracy: 0.8110
Epoch 25/30
283/283 [==============================] - 6s 22ms/step - loss: 0.8305 - accuracy: 0.7182 - val_loss: 0.7765 - val_accuracy: 0.7673
Epoch 26/30
283/283 [==============================] - 6s 21ms/step - loss: 0.8341 - accuracy: 0.7160 - val_loss: 0.6300 - val_accuracy: 0.8077
Epoch 27/30
283/283 [==============================] - 6s 21ms/step - loss: 0.7998 - accuracy: 0.7223 - val_loss: 0.6615 - val_accuracy: 0.7980
Epoch 28/30
283/283 [==============================] - 6s 21ms/step - loss: 0.7981 - accuracy: 0.7291 - val_loss: 0.5980 - val_accuracy: 0.8097
Epoch 29/30
283/283 [==============================] - 6s 21ms/step - loss: 0.7738 - accuracy: 0.7360 - val_loss: 0.5595 - val_accuracy: 0.8340
Epoch 30/30
283/283 [==============================] - 6s 21ms/step - loss: 0.7823 - accuracy: 0.7343 - val_loss: 0.6141 - val_accuracy: 0.8147
CNN Error: 19.40%
[Accuracy plot: train vs. validation accuracy per epoch, improved 37x37 model]
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_4 (Conv2D)           (None, 33, 33, 64)        1664      
                                                                 
 max_pooling2d_4 (MaxPooling  (None, 16, 16, 64)       0         
 2D)                                                             
                                                                 
 conv2d_5 (Conv2D)           (None, 14, 14, 128)       73856     
                                                                 
 max_pooling2d_5 (MaxPooling  (None, 7, 7, 128)        0         
 2D)                                                             
                                                                 
 dropout_2 (Dropout)         (None, 7, 7, 128)         0         
                                                                 
 flatten_2 (Flatten)         (None, 6272)              0         
                                                                 
 dense_4 (Dense)             (None, 256)               1605888   
                                                                 
 dropout_3 (Dropout)         (None, 256)               0         
                                                                 
 dense_5 (Dense)             (None, 128)               32896     
                                                                 
 dense_6 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 1,716,239
Trainable params: 1,716,239
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 1s 12ms/step
[Confusion matrix heatmap: improved 37x37 model on the test set]
In [22]:
# 131x131 Model

# fix random seed for reproducibility (LAB3)
seed = 88
np.random.seed(seed)

# create model (LAB3)
model = Sequential()

model.add(Conv2D(64, (5, 5), input_shape=(131, 131, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Dropout(0.25))
model.add(Flatten())

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model (the generator already batches at 32, so no batch_size argument is needed)
history_baseline = model.fit(train131, validation_data=val131, epochs=30, verbose=1, class_weight=class_weights131)

# Final evaluation of the model
scores = model.evaluate(test131, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))


plt.figure(figsize=(10, 6))
plt.plot(history_baseline.history['accuracy'], label='Train (131x131)')
plt.plot(history_baseline.history['val_accuracy'], label='Validation (131x131)')

plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

model.summary()

# Predict the output on the test set
predictions = model.predict(test131, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test131.classes

# Get the label to class mapping from the generator
class_labels = list(test131.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/30
283/283 [==============================] - 23s 73ms/step - loss: 2.5742 - accuracy: 0.1586 - val_loss: 2.1850 - val_accuracy: 0.3103
Epoch 2/30
283/283 [==============================] - 20s 70ms/step - loss: 2.1686 - accuracy: 0.3099 - val_loss: 1.8629 - val_accuracy: 0.4077
Epoch 3/30
283/283 [==============================] - 20s 70ms/step - loss: 1.8989 - accuracy: 0.4065 - val_loss: 1.5450 - val_accuracy: 0.5243
Epoch 4/30
283/283 [==============================] - 20s 70ms/step - loss: 1.6217 - accuracy: 0.4983 - val_loss: 1.4068 - val_accuracy: 0.5487
Epoch 5/30
283/283 [==============================] - 20s 71ms/step - loss: 1.3809 - accuracy: 0.5616 - val_loss: 1.2659 - val_accuracy: 0.5963
Epoch 6/30
283/283 [==============================] - 20s 70ms/step - loss: 1.2266 - accuracy: 0.6079 - val_loss: 0.9230 - val_accuracy: 0.7163
Epoch 7/30
283/283 [==============================] - 20s 71ms/step - loss: 1.0907 - accuracy: 0.6534 - val_loss: 0.8417 - val_accuracy: 0.7377
Epoch 8/30
283/283 [==============================] - 20s 70ms/step - loss: 1.0225 - accuracy: 0.6748 - val_loss: 0.9068 - val_accuracy: 0.7193
Epoch 9/30
283/283 [==============================] - 20s 70ms/step - loss: 0.8926 - accuracy: 0.7106 - val_loss: 0.8561 - val_accuracy: 0.7260
Epoch 10/30
283/283 [==============================] - 20s 71ms/step - loss: 0.8160 - accuracy: 0.7353 - val_loss: 0.6517 - val_accuracy: 0.8057
Epoch 11/30
283/283 [==============================] - 20s 71ms/step - loss: 0.7577 - accuracy: 0.7522 - val_loss: 0.6086 - val_accuracy: 0.8163
Epoch 12/30
283/283 [==============================] - 20s 71ms/step - loss: 0.7470 - accuracy: 0.7523 - val_loss: 0.6944 - val_accuracy: 0.8007
Epoch 13/30
283/283 [==============================] - 20s 71ms/step - loss: 0.7153 - accuracy: 0.7635 - val_loss: 0.6067 - val_accuracy: 0.8247
Epoch 14/30
283/283 [==============================] - 20s 71ms/step - loss: 0.6720 - accuracy: 0.7787 - val_loss: 0.6535 - val_accuracy: 0.8117
Epoch 15/30
283/283 [==============================] - 20s 71ms/step - loss: 0.6072 - accuracy: 0.7957 - val_loss: 0.4993 - val_accuracy: 0.8533
Epoch 16/30
283/283 [==============================] - 20s 71ms/step - loss: 0.5618 - accuracy: 0.8132 - val_loss: 0.5922 - val_accuracy: 0.8290
Epoch 17/30
283/283 [==============================] - 20s 71ms/step - loss: 0.5493 - accuracy: 0.8141 - val_loss: 0.5159 - val_accuracy: 0.8567
Epoch 18/30
283/283 [==============================] - 20s 71ms/step - loss: 0.5506 - accuracy: 0.8135 - val_loss: 0.6964 - val_accuracy: 0.8033
Epoch 19/30
283/283 [==============================] - 45s 160ms/step - loss: 0.5177 - accuracy: 0.8263 - val_loss: 0.4663 - val_accuracy: 0.8660
Epoch 20/30
283/283 [==============================] - 35s 124ms/step - loss: 0.4888 - accuracy: 0.8290 - val_loss: 0.5195 - val_accuracy: 0.8510
Epoch 21/30
283/283 [==============================] - 20s 70ms/step - loss: 0.5133 - accuracy: 0.8311 - val_loss: 0.5166 - val_accuracy: 0.8550
Epoch 22/30
283/283 [==============================] - 48s 171ms/step - loss: 0.4521 - accuracy: 0.8509 - val_loss: 0.5231 - val_accuracy: 0.8550
Epoch 23/30
283/283 [==============================] - 20s 70ms/step - loss: 0.4189 - accuracy: 0.8556 - val_loss: 0.6156 - val_accuracy: 0.8263
Epoch 24/30
283/283 [==============================] - 37s 132ms/step - loss: 0.4411 - accuracy: 0.8518 - val_loss: 0.4833 - val_accuracy: 0.8717
Epoch 25/30
283/283 [==============================] - 51s 179ms/step - loss: 0.3936 - accuracy: 0.8640 - val_loss: 0.4458 - val_accuracy: 0.8713
Epoch 26/30
283/283 [==============================] - 20s 70ms/step - loss: 0.4157 - accuracy: 0.8576 - val_loss: 0.4391 - val_accuracy: 0.8863
Epoch 27/30
283/283 [==============================] - 20s 70ms/step - loss: 0.3737 - accuracy: 0.8707 - val_loss: 0.6960 - val_accuracy: 0.8277
Epoch 28/30
283/283 [==============================] - 20s 70ms/step - loss: 0.3728 - accuracy: 0.8698 - val_loss: 0.4843 - val_accuracy: 0.8793
Epoch 29/30
283/283 [==============================] - 21s 74ms/step - loss: 0.3813 - accuracy: 0.8670 - val_loss: 0.4316 - val_accuracy: 0.8790
Epoch 30/30
283/283 [==============================] - 20s 71ms/step - loss: 0.3822 - accuracy: 0.8695 - val_loss: 0.4974 - val_accuracy: 0.8657
CNN Error: 13.47%
[Accuracy plot: train vs. validation accuracy per epoch, improved 131x131 model]
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_6 (Conv2D)           (None, 127, 127, 64)      1664      
                                                                 
 max_pooling2d_6 (MaxPooling  (None, 63, 63, 64)       0         
 2D)                                                             
                                                                 
 conv2d_7 (Conv2D)           (None, 61, 61, 128)       73856     
                                                                 
 max_pooling2d_7 (MaxPooling  (None, 30, 30, 128)      0         
 2D)                                                             
                                                                 
 dropout_4 (Dropout)         (None, 30, 30, 128)       0         
                                                                 
 flatten_3 (Flatten)         (None, 115200)            0         
                                                                 
 dense_7 (Dense)             (None, 256)               29491456  
                                                                 
 dropout_5 (Dropout)         (None, 256)               0         
                                                                 
 dense_8 (Dense)             (None, 128)               32896     
                                                                 
 dense_9 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 29,601,807
Trainable params: 29,601,807
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 2s 18ms/step
[Confusion matrix heatmap: improved 131x131 model on the test set]

Comments on the baseline models: after 10 epochs the 37x37 baseline reached roughly 56% validation accuracy and the 131x131 baseline roughly 74%, with accuracy still rising, so a deeper architecture and longer training should help. The improved models above achieve test errors of 19.40% (37x37) and 13.47% (131x131).

Increasing Epochs¶


Epochs are increased from 30 to 100, with early stopping (patience of 10 epochs on validation accuracy, restoring the best weights) so training halts once improvement stalls.

In [33]:
# Fix random seed for reproducibility
seed = 88
np.random.seed(seed)

# Create the model
model = Sequential()

model.add(Conv2D(64, (5, 5), input_shape=(37, 37, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Dropout(0.25))
model.add(Flatten())

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Define early stopping and a checkpoint that keeps the best weights on disk
# (the training log below shows the checkpoint saving to ./best_model)
early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint('./best_model/100_37x37.h5', monitor='val_accuracy', save_best_only=True, verbose=1)

# Fit the model with early stopping and model checkpointing
# (the generator already batches at 32, so no batch_size argument is needed)
history = model.fit(
    train37,
    validation_data=val37,
    epochs=100,
    verbose=1,
    class_weight=class_weights37,
    callbacks=[early_stopping, checkpoint]
)


# Final evaluation of the model
scores = model.evaluate(test37, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))

# Plot accuracy
plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (37x37)')
plt.plot(history.history['val_accuracy'], label='Validation (37x37)')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

# Print model summary
model.summary()

# Predict the output on the test set
predictions = model.predict(test37, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test37.classes

# Get the label to class mapping from the generator
class_labels = list(test37.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/100
280/283 [============================>.] - ETA: 0s - loss: 2.6306 - accuracy: 0.1135
Epoch 1: val_accuracy improved from -inf to 0.20067, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 22ms/step - loss: 2.6304 - accuracy: 0.1142 - val_loss: 2.4502 - val_accuracy: 0.2007
Epoch 2/100
281/283 [============================>.] - ETA: 0s - loss: 2.3637 - accuracy: 0.2450
Epoch 2: val_accuracy improved from 0.20067 to 0.34100, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 2.3633 - accuracy: 0.2459 - val_loss: 2.0614 - val_accuracy: 0.3410
Epoch 3/100
280/283 [============================>.] - ETA: 0s - loss: 2.1330 - accuracy: 0.3230
Epoch 3: val_accuracy improved from 0.34100 to 0.39333, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 2.1320 - accuracy: 0.3230 - val_loss: 1.9185 - val_accuracy: 0.3933
Epoch 4/100
281/283 [============================>.] - ETA: 0s - loss: 1.9333 - accuracy: 0.3833
Epoch 4: val_accuracy improved from 0.39333 to 0.49167, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.9313 - accuracy: 0.3839 - val_loss: 1.6282 - val_accuracy: 0.4917
Epoch 5/100
283/283 [==============================] - ETA: 0s - loss: 1.7568 - accuracy: 0.4339
Epoch 5: val_accuracy improved from 0.49167 to 0.52500, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.7568 - accuracy: 0.4339 - val_loss: 1.4928 - val_accuracy: 0.5250
Epoch 6/100
282/283 [============================>.] - ETA: 0s - loss: 1.6014 - accuracy: 0.4825
Epoch 6: val_accuracy improved from 0.52500 to 0.55633, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.6012 - accuracy: 0.4826 - val_loss: 1.4068 - val_accuracy: 0.5563
Epoch 7/100
283/283 [==============================] - ETA: 0s - loss: 1.5175 - accuracy: 0.5148
Epoch 7: val_accuracy improved from 0.55633 to 0.60700, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.5175 - accuracy: 0.5148 - val_loss: 1.2588 - val_accuracy: 0.6070
Epoch 8/100
280/283 [============================>.] - ETA: 0s - loss: 1.4262 - accuracy: 0.5450
Epoch 8: val_accuracy did not improve from 0.60700
283/283 [==============================] - 6s 21ms/step - loss: 1.4257 - accuracy: 0.5443 - val_loss: 1.2419 - val_accuracy: 0.6007
Epoch 9/100
280/283 [============================>.] - ETA: 0s - loss: 1.3476 - accuracy: 0.5598
Epoch 9: val_accuracy improved from 0.60700 to 0.68233, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.3458 - accuracy: 0.5607 - val_loss: 1.0223 - val_accuracy: 0.6823
Epoch 10/100
282/283 [============================>.] - ETA: 0s - loss: 1.2610 - accuracy: 0.5887
Epoch 10: val_accuracy did not improve from 0.68233
283/283 [==============================] - 6s 21ms/step - loss: 1.2605 - accuracy: 0.5888 - val_loss: 1.0041 - val_accuracy: 0.6770
Epoch 11/100
281/283 [============================>.] - ETA: 0s - loss: 1.2133 - accuracy: 0.5990
Epoch 11: val_accuracy improved from 0.68233 to 0.68600, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 24ms/step - loss: 1.2151 - accuracy: 0.5988 - val_loss: 0.9858 - val_accuracy: 0.6860
Epoch 12/100
282/283 [============================>.] - ETA: 0s - loss: 1.1295 - accuracy: 0.6262
Epoch 12: val_accuracy improved from 0.68600 to 0.72300, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.1293 - accuracy: 0.6259 - val_loss: 0.9022 - val_accuracy: 0.7230
Epoch 13/100
282/283 [============================>.] - ETA: 0s - loss: 1.0806 - accuracy: 0.6361
Epoch 13: val_accuracy did not improve from 0.72300
283/283 [==============================] - 6s 21ms/step - loss: 1.0802 - accuracy: 0.6361 - val_loss: 0.8938 - val_accuracy: 0.7140
Epoch 14/100
283/283 [==============================] - ETA: 0s - loss: 1.0481 - accuracy: 0.6521
Epoch 14: val_accuracy improved from 0.72300 to 0.74600, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.0481 - accuracy: 0.6521 - val_loss: 0.8169 - val_accuracy: 0.7460
Epoch 15/100
281/283 [============================>.] - ETA: 0s - loss: 0.9934 - accuracy: 0.6615
Epoch 15: val_accuracy did not improve from 0.74600
283/283 [==============================] - 7s 23ms/step - loss: 0.9939 - accuracy: 0.6609 - val_loss: 0.8397 - val_accuracy: 0.7410
Epoch 16/100
280/283 [============================>.] - ETA: 0s - loss: 0.9905 - accuracy: 0.6710
Epoch 16: val_accuracy improved from 0.74600 to 0.78133, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 0.9903 - accuracy: 0.6709 - val_loss: 0.7199 - val_accuracy: 0.7813
Epoch 17/100
281/283 [============================>.] - ETA: 0s - loss: 0.9351 - accuracy: 0.6851
Epoch 17: val_accuracy did not improve from 0.78133
283/283 [==============================] - 7s 23ms/step - loss: 0.9350 - accuracy: 0.6855 - val_loss: 0.7775 - val_accuracy: 0.7597
Epoch 18/100
281/283 [============================>.] - ETA: 0s - loss: 0.8837 - accuracy: 0.7030
Epoch 18: val_accuracy did not improve from 0.78133
283/283 [==============================] - 7s 24ms/step - loss: 0.8819 - accuracy: 0.7028 - val_loss: 0.7634 - val_accuracy: 0.7697
Epoch 19/100
283/283 [==============================] - ETA: 0s - loss: 0.8853 - accuracy: 0.7002
Epoch 19: val_accuracy improved from 0.78133 to 0.80000, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.8853 - accuracy: 0.7002 - val_loss: 0.6528 - val_accuracy: 0.8000
Epoch 20/100
282/283 [============================>.] - ETA: 0s - loss: 0.8584 - accuracy: 0.7018
Epoch 20: val_accuracy improved from 0.80000 to 0.80933, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.8583 - accuracy: 0.7020 - val_loss: 0.6237 - val_accuracy: 0.8093
Epoch 21/100
283/283 [==============================] - ETA: 0s - loss: 0.8134 - accuracy: 0.7239
Epoch 21: val_accuracy did not improve from 0.80933
283/283 [==============================] - 6s 22ms/step - loss: 0.8134 - accuracy: 0.7239 - val_loss: 0.6308 - val_accuracy: 0.8063
Epoch 22/100
282/283 [============================>.] - ETA: 0s - loss: 0.7952 - accuracy: 0.7265
Epoch 22: val_accuracy did not improve from 0.80933
283/283 [==============================] - 6s 22ms/step - loss: 0.7950 - accuracy: 0.7265 - val_loss: 0.6696 - val_accuracy: 0.7913
Epoch 23/100
282/283 [============================>.] - ETA: 0s - loss: 0.7757 - accuracy: 0.7393
Epoch 23: val_accuracy improved from 0.80933 to 0.82333, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.7767 - accuracy: 0.7387 - val_loss: 0.5701 - val_accuracy: 0.8233
Epoch 24/100
283/283 [==============================] - ETA: 0s - loss: 0.7766 - accuracy: 0.7398
Epoch 24: val_accuracy did not improve from 0.82333
283/283 [==============================] - 6s 22ms/step - loss: 0.7766 - accuracy: 0.7398 - val_loss: 0.5888 - val_accuracy: 0.8173
Epoch 25/100
282/283 [============================>.] - ETA: 0s - loss: 0.7236 - accuracy: 0.7508
Epoch 25: val_accuracy did not improve from 0.82333
283/283 [==============================] - 6s 22ms/step - loss: 0.7236 - accuracy: 0.7507 - val_loss: 0.5694 - val_accuracy: 0.8230
Epoch 26/100
283/283 [==============================] - ETA: 0s - loss: 0.7177 - accuracy: 0.7503
Epoch 26: val_accuracy did not improve from 0.82333
283/283 [==============================] - 6s 22ms/step - loss: 0.7177 - accuracy: 0.7503 - val_loss: 0.6096 - val_accuracy: 0.8127
Epoch 27/100
281/283 [============================>.] - ETA: 0s - loss: 0.6950 - accuracy: 0.7613
Epoch 27: val_accuracy did not improve from 0.82333
283/283 [==============================] - 6s 23ms/step - loss: 0.6982 - accuracy: 0.7603 - val_loss: 0.6221 - val_accuracy: 0.8043
Epoch 28/100
281/283 [============================>.] - ETA: 0s - loss: 0.6751 - accuracy: 0.7687
Epoch 28: val_accuracy improved from 0.82333 to 0.82867, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 0.6740 - accuracy: 0.7689 - val_loss: 0.5593 - val_accuracy: 0.8287
Epoch 29/100
283/283 [==============================] - ETA: 0s - loss: 0.6554 - accuracy: 0.7751
Epoch 29: val_accuracy did not improve from 0.82867
283/283 [==============================] - 6s 22ms/step - loss: 0.6554 - accuracy: 0.7751 - val_loss: 0.5911 - val_accuracy: 0.8250
Epoch 30/100
281/283 [============================>.] - ETA: 0s - loss: 0.6668 - accuracy: 0.7730
Epoch 30: val_accuracy improved from 0.82867 to 0.83700, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 0.6672 - accuracy: 0.7729 - val_loss: 0.5149 - val_accuracy: 0.8370
Epoch 31/100
282/283 [============================>.] - ETA: 0s - loss: 0.6349 - accuracy: 0.7793
Epoch 31: val_accuracy did not improve from 0.83700
283/283 [==============================] - 6s 22ms/step - loss: 0.6345 - accuracy: 0.7792 - val_loss: 0.5839 - val_accuracy: 0.8357
Epoch 32/100
283/283 [==============================] - ETA: 0s - loss: 0.6503 - accuracy: 0.7765
Epoch 32: val_accuracy did not improve from 0.83700
283/283 [==============================] - 6s 22ms/step - loss: 0.6503 - accuracy: 0.7765 - val_loss: 0.5887 - val_accuracy: 0.8267
Epoch 33/100
283/283 [==============================] - ETA: 0s - loss: 0.6295 - accuracy: 0.7831
Epoch 33: val_accuracy improved from 0.83700 to 0.85033, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.6295 - accuracy: 0.7831 - val_loss: 0.5222 - val_accuracy: 0.8503
Epoch 34/100
283/283 [==============================] - ETA: 0s - loss: 0.6090 - accuracy: 0.7888
Epoch 34: val_accuracy improved from 0.85033 to 0.85367, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.6090 - accuracy: 0.7888 - val_loss: 0.4842 - val_accuracy: 0.8537
Epoch 35/100
282/283 [============================>.] - ETA: 0s - loss: 0.5997 - accuracy: 0.7940
Epoch 35: val_accuracy improved from 0.85367 to 0.86100, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5997 - accuracy: 0.7942 - val_loss: 0.4738 - val_accuracy: 0.8610
Epoch 36/100
283/283 [==============================] - ETA: 0s - loss: 0.5735 - accuracy: 0.8046
Epoch 36: val_accuracy improved from 0.86100 to 0.86800, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5735 - accuracy: 0.8046 - val_loss: 0.4491 - val_accuracy: 0.8680
Epoch 37/100
282/283 [============================>.] - ETA: 0s - loss: 0.5783 - accuracy: 0.7975
Epoch 37: val_accuracy did not improve from 0.86800
283/283 [==============================] - 6s 22ms/step - loss: 0.5781 - accuracy: 0.7975 - val_loss: 0.4729 - val_accuracy: 0.8603
Epoch 38/100
281/283 [============================>.] - ETA: 0s - loss: 0.5633 - accuracy: 0.8091
Epoch 38: val_accuracy did not improve from 0.86800
283/283 [==============================] - 6s 23ms/step - loss: 0.5638 - accuracy: 0.8089 - val_loss: 0.4959 - val_accuracy: 0.8550
Epoch 39/100
282/283 [============================>.] - ETA: 0s - loss: 0.5871 - accuracy: 0.7987
Epoch 39: val_accuracy did not improve from 0.86800
283/283 [==============================] - 7s 23ms/step - loss: 0.5866 - accuracy: 0.7987 - val_loss: 0.4972 - val_accuracy: 0.8517
Epoch 40/100
282/283 [============================>.] - ETA: 0s - loss: 0.5417 - accuracy: 0.8102
Epoch 40: val_accuracy did not improve from 0.86800
283/283 [==============================] - 7s 23ms/step - loss: 0.5402 - accuracy: 0.8108 - val_loss: 0.4919 - val_accuracy: 0.8610
Epoch 41/100
282/283 [============================>.] - ETA: 0s - loss: 0.5390 - accuracy: 0.8080
Epoch 41: val_accuracy improved from 0.86800 to 0.87433, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 0.5405 - accuracy: 0.8079 - val_loss: 0.4517 - val_accuracy: 0.8743
Epoch 42/100
281/283 [============================>.] - ETA: 0s - loss: 0.5593 - accuracy: 0.8042
Epoch 42: val_accuracy did not improve from 0.87433
283/283 [==============================] - 6s 22ms/step - loss: 0.5580 - accuracy: 0.8045 - val_loss: 0.6774 - val_accuracy: 0.7930
Epoch 43/100
281/283 [============================>.] - ETA: 0s - loss: 0.5386 - accuracy: 0.8083
Epoch 43: val_accuracy did not improve from 0.87433
283/283 [==============================] - 6s 22ms/step - loss: 0.5389 - accuracy: 0.8087 - val_loss: 0.4371 - val_accuracy: 0.8683
Epoch 44/100
282/283 [============================>.] - ETA: 0s - loss: 0.4936 - accuracy: 0.8260
Epoch 44: val_accuracy improved from 0.87433 to 0.87867, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.4941 - accuracy: 0.8259 - val_loss: 0.4359 - val_accuracy: 0.8787
Epoch 45/100
283/283 [==============================] - ETA: 0s - loss: 0.5155 - accuracy: 0.8192
Epoch 45: val_accuracy did not improve from 0.87867
283/283 [==============================] - 6s 22ms/step - loss: 0.5155 - accuracy: 0.8192 - val_loss: 0.4439 - val_accuracy: 0.8657
Epoch 46/100
283/283 [==============================] - ETA: 0s - loss: 0.5090 - accuracy: 0.8252
Epoch 46: val_accuracy did not improve from 0.87867
283/283 [==============================] - 6s 22ms/step - loss: 0.5090 - accuracy: 0.8252 - val_loss: 0.4767 - val_accuracy: 0.8677
Epoch 47/100
283/283 [==============================] - ETA: 0s - loss: 0.4816 - accuracy: 0.8289
Epoch 47: val_accuracy did not improve from 0.87867
283/283 [==============================] - 6s 22ms/step - loss: 0.4816 - accuracy: 0.8289 - val_loss: 0.4980 - val_accuracy: 0.8557
Epoch 48/100
282/283 [============================>.] - ETA: 0s - loss: 0.4854 - accuracy: 0.8254
Epoch 48: val_accuracy did not improve from 0.87867
283/283 [==============================] - 6s 23ms/step - loss: 0.4845 - accuracy: 0.8257 - val_loss: 0.4641 - val_accuracy: 0.8610
Epoch 49/100
281/283 [============================>.] - ETA: 0s - loss: 0.4785 - accuracy: 0.8291
Epoch 49: val_accuracy did not improve from 0.87867
283/283 [==============================] - 6s 22ms/step - loss: 0.4803 - accuracy: 0.8284 - val_loss: 0.4388 - val_accuracy: 0.8683
Epoch 50/100
283/283 [==============================] - ETA: 0s - loss: 0.4774 - accuracy: 0.8296
Epoch 50: val_accuracy did not improve from 0.87867
283/283 [==============================] - 6s 22ms/step - loss: 0.4774 - accuracy: 0.8296 - val_loss: 0.4906 - val_accuracy: 0.8610
Epoch 51/100
282/283 [============================>.] - ETA: 0s - loss: 0.4605 - accuracy: 0.8367
Epoch 51: val_accuracy did not improve from 0.87867
283/283 [==============================] - 6s 22ms/step - loss: 0.4610 - accuracy: 0.8365 - val_loss: 0.4570 - val_accuracy: 0.8637
Epoch 52/100
282/283 [============================>.] - ETA: 0s - loss: 0.4439 - accuracy: 0.8438
Epoch 52: val_accuracy improved from 0.87867 to 0.88067, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 0.4440 - accuracy: 0.8438 - val_loss: 0.4340 - val_accuracy: 0.8807
Epoch 53/100
283/283 [==============================] - ETA: 0s - loss: 0.4633 - accuracy: 0.8384
Epoch 53: val_accuracy improved from 0.88067 to 0.88967, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.4633 - accuracy: 0.8384 - val_loss: 0.4111 - val_accuracy: 0.8897
Epoch 54/100
283/283 [==============================] - ETA: 0s - loss: 0.4456 - accuracy: 0.8407
Epoch 54: val_accuracy did not improve from 0.88967
283/283 [==============================] - 6s 22ms/step - loss: 0.4456 - accuracy: 0.8407 - val_loss: 0.4370 - val_accuracy: 0.8797
Epoch 55/100
282/283 [============================>.] - ETA: 0s - loss: 0.4569 - accuracy: 0.8389
Epoch 55: val_accuracy improved from 0.88967 to 0.89433, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.4570 - accuracy: 0.8387 - val_loss: 0.3819 - val_accuracy: 0.8943
Epoch 56/100
282/283 [============================>.] - ETA: 0s - loss: 0.4611 - accuracy: 0.8416
Epoch 56: val_accuracy improved from 0.89433 to 0.89733, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.4613 - accuracy: 0.8418 - val_loss: 0.3628 - val_accuracy: 0.8973
Epoch 57/100
282/283 [============================>.] - ETA: 0s - loss: 0.4543 - accuracy: 0.8438
Epoch 57: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.4542 - accuracy: 0.8436 - val_loss: 0.3891 - val_accuracy: 0.8877
Epoch 58/100
283/283 [==============================] - ETA: 0s - loss: 0.4474 - accuracy: 0.8395
Epoch 58: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.4474 - accuracy: 0.8395 - val_loss: 0.3873 - val_accuracy: 0.8847
Epoch 59/100
282/283 [============================>.] - ETA: 0s - loss: 0.4426 - accuracy: 0.8439
Epoch 59: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.4438 - accuracy: 0.8439 - val_loss: 0.4420 - val_accuracy: 0.8690
Epoch 60/100
282/283 [============================>.] - ETA: 0s - loss: 0.4336 - accuracy: 0.8495
Epoch 60: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.4335 - accuracy: 0.8498 - val_loss: 0.4238 - val_accuracy: 0.8793
Epoch 61/100
281/283 [============================>.] - ETA: 0s - loss: 0.4126 - accuracy: 0.8524
Epoch 61: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.4140 - accuracy: 0.8523 - val_loss: 0.4181 - val_accuracy: 0.8830
Epoch 62/100
282/283 [============================>.] - ETA: 0s - loss: 0.4146 - accuracy: 0.8542
Epoch 62: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.4139 - accuracy: 0.8545 - val_loss: 0.4810 - val_accuracy: 0.8660
Epoch 63/100
282/283 [============================>.] - ETA: 0s - loss: 0.4217 - accuracy: 0.8483
Epoch 63: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.4208 - accuracy: 0.8487 - val_loss: 0.4340 - val_accuracy: 0.8783
Epoch 64/100
283/283 [==============================] - ETA: 0s - loss: 0.4111 - accuracy: 0.8558
Epoch 64: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.4111 - accuracy: 0.8558 - val_loss: 0.4265 - val_accuracy: 0.8820
Epoch 65/100
282/283 [============================>.] - ETA: 0s - loss: 0.3990 - accuracy: 0.8605
Epoch 65: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.3995 - accuracy: 0.8605 - val_loss: 0.3709 - val_accuracy: 0.8973
Epoch 66/100
282/283 [============================>.] - ETA: 0s - loss: 0.3894 - accuracy: 0.8609Restoring model weights from the end of the best epoch: 56.

Epoch 66: val_accuracy did not improve from 0.89733
283/283 [==============================] - 6s 22ms/step - loss: 0.3892 - accuracy: 0.8610 - val_loss: 0.4171 - val_accuracy: 0.8893
Epoch 66: early stopping
CNN Error: 10.43%
[Figure: training vs. validation accuracy (37x37 input)]
Model: "sequential_13"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_33 (Conv2D)          (None, 33, 33, 64)        1664      
                                                                 
 max_pooling2d_33 (MaxPoolin  (None, 16, 16, 64)       0         
 g2D)                                                            
                                                                 
 conv2d_34 (Conv2D)          (None, 14, 14, 128)       73856     
                                                                 
 max_pooling2d_34 (MaxPoolin  (None, 7, 7, 128)        0         
 g2D)                                                            
                                                                 
 dropout_24 (Dropout)        (None, 7, 7, 128)         0         
                                                                 
 flatten_13 (Flatten)        (None, 6272)              0         
                                                                 
 dense_37 (Dense)            (None, 256)               1605888   
                                                                 
 dropout_25 (Dropout)        (None, 256)               0         
                                                                 
 dense_38 (Dense)            (None, 128)               32896     
                                                                 
 dense_39 (Dense)            (None, 15)                1935      
                                                                 
=================================================================
Total params: 1,716,239
Trainable params: 1,716,239
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 1s 14ms/step
[Figure: confusion matrix on the test set (37x37 input)]
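The parameter counts in the summary above can be checked by hand. The same arithmetic also explains why the earlier 131x131 summary reported ~29.6M parameters: its Flatten output is 30×30×128 instead of 7×7×128, and the first Dense layer dominates the total.

```python
# Param formulas: Conv2D = k*k*c_in*c_out + c_out; Dense = n_in*n_out + n_out
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

def total_params(side):
    # 'valid' convs and 2x2 max-pools: side - 4 (5x5 conv), //2, - 2 (3x3 conv), //2
    flat_side = ((side - 4) // 2 - 2) // 2
    flat = flat_side * flat_side * 128
    return (conv_params(5, 1, 64) + conv_params(3, 64, 128)
            + dense_params(flat, 256) + dense_params(256, 128)
            + dense_params(128, 15))

print(total_params(37))   # 1,716,239  — matches the summary above
print(total_params(131))  # 29,601,807 — matches the 131x131 summary
```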
In [34]:
# Fix random seed for reproducibility (seeds NumPy only; TensorFlow keeps its own RNG)
seed = 88
np.random.seed(seed)

# Create the model
model = Sequential()

model.add(Conv2D(64, (5, 5), input_shape=(131, 131, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Dropout(0.25))
model.add(Flatten())

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Define early stopping
early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True, verbose=1)


# Fit the model with early stopping
history = model.fit(
    train131, 
    validation_data=val131, 
    epochs=100, 
    batch_size=200,  # not used here: the generator already yields fixed-size batches
    verbose=1, 
    class_weight=class_weights131, 
    callbacks=[early_stopping]
)


# Final evaluation of the model
scores = model.evaluate(test131, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))

# Plot accuracy
plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (131x131)')
plt.plot(history.history['val_accuracy'], label='Validation (131x131)')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

# Print model summary
model.summary()

# Predict the output on the test set
predictions = model.predict(test131, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test131.classes

# Get the label to class mapping from the generator
class_labels = list(test131.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/100
283/283 [==============================] - ETA: 0s - loss: 2.5587 - accuracy: 0.1626
Epoch 1: val_accuracy improved from -inf to 0.30233, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 22s 76ms/step - loss: 2.5587 - accuracy: 0.1626 - val_loss: 2.1553 - val_accuracy: 0.3023
Epoch 2/100
283/283 [==============================] - ETA: 0s - loss: 2.1517 - accuracy: 0.3090
Epoch 2: val_accuracy improved from 0.30233 to 0.35333, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 2.1517 - accuracy: 0.3090 - val_loss: 2.0407 - val_accuracy: 0.3533
Epoch 3/100
283/283 [==============================] - ETA: 0s - loss: 1.8954 - accuracy: 0.3980
Epoch 3: val_accuracy improved from 0.35333 to 0.47400, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 1.8954 - accuracy: 0.3980 - val_loss: 1.6601 - val_accuracy: 0.4740
Epoch 4/100
283/283 [==============================] - ETA: 0s - loss: 1.6597 - accuracy: 0.4796
Epoch 4: val_accuracy improved from 0.47400 to 0.58367, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 1.6597 - accuracy: 0.4796 - val_loss: 1.3296 - val_accuracy: 0.5837
Epoch 5/100
283/283 [==============================] - ETA: 0s - loss: 1.4490 - accuracy: 0.5452
Epoch 5: val_accuracy improved from 0.58367 to 0.66000, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 1.4490 - accuracy: 0.5452 - val_loss: 1.0916 - val_accuracy: 0.6600
Epoch 6/100
283/283 [==============================] - ETA: 0s - loss: 1.2626 - accuracy: 0.5914
Epoch 6: val_accuracy improved from 0.66000 to 0.69967, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 1.2626 - accuracy: 0.5914 - val_loss: 0.9670 - val_accuracy: 0.6997
Epoch 7/100
283/283 [==============================] - ETA: 0s - loss: 1.1277 - accuracy: 0.6436
Epoch 7: val_accuracy improved from 0.69967 to 0.72633, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 1.1277 - accuracy: 0.6436 - val_loss: 0.9086 - val_accuracy: 0.7263
Epoch 8/100
283/283 [==============================] - ETA: 0s - loss: 1.0134 - accuracy: 0.6759
Epoch 8: val_accuracy improved from 0.72633 to 0.77500, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 1.0134 - accuracy: 0.6759 - val_loss: 0.7730 - val_accuracy: 0.7750
Epoch 9/100
283/283 [==============================] - ETA: 0s - loss: 0.9191 - accuracy: 0.7070
Epoch 9: val_accuracy improved from 0.77500 to 0.77767, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 0.9191 - accuracy: 0.7070 - val_loss: 0.7611 - val_accuracy: 0.7777
Epoch 10/100
283/283 [==============================] - ETA: 0s - loss: 0.8504 - accuracy: 0.7232
Epoch 10: val_accuracy did not improve from 0.77767
283/283 [==============================] - 20s 71ms/step - loss: 0.8504 - accuracy: 0.7232 - val_loss: 0.7963 - val_accuracy: 0.7620
Epoch 11/100
283/283 [==============================] - ETA: 0s - loss: 0.7828 - accuracy: 0.7438
Epoch 11: val_accuracy did not improve from 0.77767
283/283 [==============================] - 20s 72ms/step - loss: 0.7828 - accuracy: 0.7438 - val_loss: 0.7983 - val_accuracy: 0.7567
Epoch 12/100
283/283 [==============================] - ETA: 0s - loss: 0.7091 - accuracy: 0.7676
Epoch 12: val_accuracy improved from 0.77767 to 0.78000, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.7091 - accuracy: 0.7676 - val_loss: 0.7645 - val_accuracy: 0.7800
Epoch 13/100
283/283 [==============================] - ETA: 0s - loss: 0.6727 - accuracy: 0.7784
Epoch 13: val_accuracy improved from 0.78000 to 0.81367, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.6727 - accuracy: 0.7784 - val_loss: 0.6228 - val_accuracy: 0.8137
Epoch 14/100
283/283 [==============================] - ETA: 0s - loss: 0.6320 - accuracy: 0.7872
Epoch 14: val_accuracy improved from 0.81367 to 0.84433, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.6320 - accuracy: 0.7872 - val_loss: 0.5459 - val_accuracy: 0.8443
Epoch 15/100
283/283 [==============================] - ETA: 0s - loss: 0.5941 - accuracy: 0.8049
Epoch 15: val_accuracy did not improve from 0.84433
283/283 [==============================] - 20s 71ms/step - loss: 0.5941 - accuracy: 0.8049 - val_loss: 0.5622 - val_accuracy: 0.8347
Epoch 16/100
283/283 [==============================] - ETA: 0s - loss: 0.5647 - accuracy: 0.8127
Epoch 16: val_accuracy improved from 0.84433 to 0.85500, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.5647 - accuracy: 0.8127 - val_loss: 0.5085 - val_accuracy: 0.8550
Epoch 17/100
283/283 [==============================] - ETA: 0s - loss: 0.5621 - accuracy: 0.8138
Epoch 17: val_accuracy did not improve from 0.85500
283/283 [==============================] - 20s 71ms/step - loss: 0.5621 - accuracy: 0.8138 - val_loss: 0.6215 - val_accuracy: 0.8330
Epoch 18/100
283/283 [==============================] - ETA: 0s - loss: 0.5393 - accuracy: 0.8179
Epoch 18: val_accuracy did not improve from 0.85500
283/283 [==============================] - 20s 72ms/step - loss: 0.5393 - accuracy: 0.8179 - val_loss: 0.5231 - val_accuracy: 0.8487
Epoch 19/100
283/283 [==============================] - ETA: 0s - loss: 0.4843 - accuracy: 0.8415
Epoch 19: val_accuracy did not improve from 0.85500
283/283 [==============================] - 20s 71ms/step - loss: 0.4843 - accuracy: 0.8415 - val_loss: 0.6018 - val_accuracy: 0.8347
Epoch 20/100
283/283 [==============================] - ETA: 0s - loss: 0.4987 - accuracy: 0.8322
Epoch 20: val_accuracy improved from 0.85500 to 0.86000, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.4987 - accuracy: 0.8322 - val_loss: 0.4905 - val_accuracy: 0.8600
Epoch 21/100
283/283 [==============================] - ETA: 0s - loss: 0.4525 - accuracy: 0.8465
Epoch 21: val_accuracy did not improve from 0.86000
283/283 [==============================] - 20s 72ms/step - loss: 0.4525 - accuracy: 0.8465 - val_loss: 0.6068 - val_accuracy: 0.8417
Epoch 22/100
283/283 [==============================] - ETA: 0s - loss: 0.4326 - accuracy: 0.8492
Epoch 22: val_accuracy did not improve from 0.86000
283/283 [==============================] - 20s 72ms/step - loss: 0.4326 - accuracy: 0.8492 - val_loss: 0.5019 - val_accuracy: 0.8533
Epoch 23/100
283/283 [==============================] - ETA: 0s - loss: 0.4358 - accuracy: 0.8561
Epoch 23: val_accuracy did not improve from 0.86000
283/283 [==============================] - 20s 72ms/step - loss: 0.4358 - accuracy: 0.8561 - val_loss: 0.4935 - val_accuracy: 0.8593
Epoch 24/100
283/283 [==============================] - ETA: 0s - loss: 0.4133 - accuracy: 0.8583
Epoch 24: val_accuracy did not improve from 0.86000
283/283 [==============================] - 20s 72ms/step - loss: 0.4133 - accuracy: 0.8583 - val_loss: 0.5203 - val_accuracy: 0.8540
Epoch 25/100
283/283 [==============================] - ETA: 0s - loss: 0.4058 - accuracy: 0.8625
Epoch 25: val_accuracy did not improve from 0.86000
283/283 [==============================] - 20s 72ms/step - loss: 0.4058 - accuracy: 0.8625 - val_loss: 0.4958 - val_accuracy: 0.8590
Epoch 26/100
283/283 [==============================] - ETA: 0s - loss: 0.3768 - accuracy: 0.8751
Epoch 26: val_accuracy did not improve from 0.86000
283/283 [==============================] - 20s 72ms/step - loss: 0.3768 - accuracy: 0.8751 - val_loss: 0.5500 - val_accuracy: 0.8543
Epoch 27/100
283/283 [==============================] - ETA: 0s - loss: 0.3723 - accuracy: 0.8720
Epoch 27: val_accuracy did not improve from 0.86000
283/283 [==============================] - 21s 72ms/step - loss: 0.3723 - accuracy: 0.8720 - val_loss: 0.5272 - val_accuracy: 0.8587
Epoch 28/100
283/283 [==============================] - ETA: 0s - loss: 0.4002 - accuracy: 0.8652
Epoch 28: val_accuracy improved from 0.86000 to 0.86500, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 75ms/step - loss: 0.4002 - accuracy: 0.8652 - val_loss: 0.5025 - val_accuracy: 0.8650
Epoch 29/100
283/283 [==============================] - ETA: 0s - loss: 0.3729 - accuracy: 0.8764
Epoch 29: val_accuracy improved from 0.86500 to 0.89000, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.3729 - accuracy: 0.8764 - val_loss: 0.4043 - val_accuracy: 0.8900
Epoch 30/100
283/283 [==============================] - ETA: 0s - loss: 0.3452 - accuracy: 0.8887
Epoch 30: val_accuracy did not improve from 0.89000
283/283 [==============================] - 20s 71ms/step - loss: 0.3452 - accuracy: 0.8887 - val_loss: 0.5755 - val_accuracy: 0.8547
Epoch 31/100
283/283 [==============================] - ETA: 0s - loss: 0.3498 - accuracy: 0.8787
Epoch 31: val_accuracy did not improve from 0.89000
283/283 [==============================] - 21s 72ms/step - loss: 0.3498 - accuracy: 0.8787 - val_loss: 0.4346 - val_accuracy: 0.8890
Epoch 32/100
283/283 [==============================] - ETA: 0s - loss: 0.3327 - accuracy: 0.8869
Epoch 32: val_accuracy did not improve from 0.89000
283/283 [==============================] - 21s 74ms/step - loss: 0.3327 - accuracy: 0.8869 - val_loss: 0.4489 - val_accuracy: 0.8807
Epoch 33/100
283/283 [==============================] - ETA: 0s - loss: 0.3037 - accuracy: 0.8977
Epoch 33: val_accuracy did not improve from 0.89000
283/283 [==============================] - 21s 72ms/step - loss: 0.3037 - accuracy: 0.8977 - val_loss: 0.4092 - val_accuracy: 0.8847
Epoch 34/100
283/283 [==============================] - ETA: 0s - loss: 0.3136 - accuracy: 0.8962
Epoch 34: val_accuracy did not improve from 0.89000
283/283 [==============================] - 21s 73ms/step - loss: 0.3136 - accuracy: 0.8962 - val_loss: 0.5195 - val_accuracy: 0.8677
Epoch 35/100
283/283 [==============================] - ETA: 0s - loss: 0.3332 - accuracy: 0.8851
Epoch 35: val_accuracy did not improve from 0.89000
283/283 [==============================] - 21s 72ms/step - loss: 0.3332 - accuracy: 0.8851 - val_loss: 0.4928 - val_accuracy: 0.8707
Epoch 36/100
283/283 [==============================] - ETA: 0s - loss: 0.3191 - accuracy: 0.8897
Epoch 36: val_accuracy did not improve from 0.89000
283/283 [==============================] - 21s 73ms/step - loss: 0.3191 - accuracy: 0.8897 - val_loss: 0.4200 - val_accuracy: 0.8853
Epoch 37/100
283/283 [==============================] - ETA: 0s - loss: 0.2862 - accuracy: 0.8998
Epoch 37: val_accuracy did not improve from 0.89000
283/283 [==============================] - 20s 72ms/step - loss: 0.2862 - accuracy: 0.8998 - val_loss: 0.5522 - val_accuracy: 0.8570
Epoch 38/100
283/283 [==============================] - ETA: 0s - loss: 0.2864 - accuracy: 0.9013
Epoch 38: val_accuracy did not improve from 0.89000
283/283 [==============================] - 21s 73ms/step - loss: 0.2864 - accuracy: 0.9013 - val_loss: 0.5142 - val_accuracy: 0.8677
Epoch 39/100
283/283 [==============================] - ETA: 0s - loss: 0.2972 - accuracy: 0.9024Restoring model weights from the end of the best epoch: 29.

Epoch 39: val_accuracy did not improve from 0.89000
283/283 [==============================] - 21s 72ms/step - loss: 0.2972 - accuracy: 0.9024 - val_loss: 0.5026 - val_accuracy: 0.8740
Epoch 39: early stopping
CNN Error: 10.83%
[Figure: training vs validation accuracy per epoch (131x131 model)]
Model: "sequential_14"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_35 (Conv2D)          (None, 127, 127, 64)      1664      
                                                                 
 max_pooling2d_35 (MaxPoolin  (None, 63, 63, 64)       0         
 g2D)                                                            
                                                                 
 conv2d_36 (Conv2D)          (None, 61, 61, 128)       73856     
                                                                 
 max_pooling2d_36 (MaxPoolin  (None, 30, 30, 128)      0         
 g2D)                                                            
                                                                 
 dropout_26 (Dropout)        (None, 30, 30, 128)       0         
                                                                 
 flatten_14 (Flatten)        (None, 115200)            0         
                                                                 
 dense_40 (Dense)            (None, 256)               29491456  
                                                                 
 dropout_27 (Dropout)        (None, 256)               0         
                                                                 
 dense_41 (Dense)            (None, 128)               32896     
                                                                 
 dense_42 (Dense)            (None, 15)                1935      
                                                                 
=================================================================
Total params: 29,601,807
Trainable params: 29,601,807
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 2s 19ms/step
[Figure: confusion matrix (131x131 model)]

Observations:

I have also realised that the 131x131 model shows some underfitting, with validation accuracy exceeding training accuracy; hence I will try L2 regularisation.

Reducing underfitting/overfitting in my model¶


  • Adding more Dropout layers
  • Adding regularisation L2

L2 Regularization (Ridge):

L2 regularization adds a penalty term proportional to the square of the weights to the loss function. This penalty discourages large weights and helps to smooth out the learned parameters. While L2 regularization is more commonly used to address overfitting, it can indirectly prevent underfitting by ensuring that the model doesn't over-rely on a small set of features or fit the training data too closely.

Text above cited from: ChatGPT
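As a quick numeric sketch (my own illustration, not from the cited sources), the L2 penalty that gets added to the loss is the regularization factor times the sum of squared weights; the factor below matches the 0.0001 passed to `kernel_regularizer=l2(0.0001)` in the model code, while the weight values and data loss are hypothetical:

```python
import numpy as np

# Illustrative weight matrix (hypothetical values, not from the trained model)
weights = np.array([[0.5, -1.2],
                    [2.0,  0.1]])
l2_factor = 1e-4  # same factor as kernel_regularizer=l2(0.0001)

# L2 penalty: factor * sum of squared weights
penalty = l2_factor * np.sum(weights ** 2)

# Keras minimises the data loss plus this penalty
data_loss = 0.8  # e.g. a categorical cross-entropy value (illustrative)
total_loss = data_loss + penalty
print(penalty, total_loss)
```

Because the penalty grows with the square of each weight, large weights are pushed down harder than small ones, which is why the learned parameters end up smoother.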

For code below, cited from: https://keras.io/api/layers/regularizers/

In [35]:
# fix random seed for reproducibility
seed = 88
np.random.seed(seed)

# create model
model = Sequential()

model.add(Conv2D(64, (5, 5), input_shape=(37, 37, 1), activation='relu', kernel_regularizer=l2(0.0001)))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(256, activation='relu', kernel_regularizer=l2(0.0001)))
model.add(Dropout(0.25))

model.add(Dense(128, activation='relu'))
model.add(Dropout(0.25))

model.add(Dense(num_classes, activation='softmax'))

# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Define early stopping
early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True, verbose=1)

# Fit the model with early stopping
history = model.fit(
    train37, 
    validation_data=val37, 
    epochs=100, 
    batch_size=200, 
    verbose=1, 
    class_weight=class_weights37, 
    callbacks=[early_stopping]
)



# Final evaluation of the model
scores = model.evaluate(test37, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))

plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (37x37)')
plt.plot(history.history['val_accuracy'], label='Validation (37x37)')

plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

model.summary()

# Predict the output on the test set
predictions = model.predict(test37, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test37.classes

# Get the label to class mapping from the generator
class_labels = list(test37.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/100
282/283 [============================>.] - ETA: 0s - loss: 2.6987 - accuracy: 0.0800
Epoch 1: val_accuracy improved from -inf to 0.13500, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 11s 36ms/step - loss: 2.6974 - accuracy: 0.0806 - val_loss: 2.5704 - val_accuracy: 0.1350
Epoch 2/100
282/283 [============================>.] - ETA: 0s - loss: 2.5356 - accuracy: 0.1608
Epoch 2: val_accuracy improved from 0.13500 to 0.22433, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 2.5357 - accuracy: 0.1607 - val_loss: 2.3789 - val_accuracy: 0.2243
Epoch 3/100
281/283 [============================>.] - ETA: 0s - loss: 2.3736 - accuracy: 0.2163
Epoch 3: val_accuracy improved from 0.22433 to 0.30033, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 2.3736 - accuracy: 0.2167 - val_loss: 2.1937 - val_accuracy: 0.3003
Epoch 4/100
281/283 [============================>.] - ETA: 0s - loss: 2.2412 - accuracy: 0.2667
Epoch 4: val_accuracy improved from 0.30033 to 0.37233, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 21ms/step - loss: 2.2412 - accuracy: 0.2669 - val_loss: 1.9753 - val_accuracy: 0.3723
Epoch 5/100
282/283 [============================>.] - ETA: 0s - loss: 2.0749 - accuracy: 0.3229
Epoch 5: val_accuracy improved from 0.37233 to 0.38500, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 2.0753 - accuracy: 0.3226 - val_loss: 1.9351 - val_accuracy: 0.3850
Epoch 6/100
281/283 [============================>.] - ETA: 0s - loss: 1.9524 - accuracy: 0.3735
Epoch 6: val_accuracy improved from 0.38500 to 0.45133, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 23ms/step - loss: 1.9511 - accuracy: 0.3742 - val_loss: 1.6927 - val_accuracy: 0.4513
Epoch 7/100
281/283 [============================>.] - ETA: 0s - loss: 1.8105 - accuracy: 0.4205
Epoch 7: val_accuracy improved from 0.45133 to 0.50133, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 24ms/step - loss: 1.8094 - accuracy: 0.4208 - val_loss: 1.5988 - val_accuracy: 0.5013
Epoch 8/100
280/283 [============================>.] - ETA: 0s - loss: 1.6917 - accuracy: 0.4559
Epoch 8: val_accuracy improved from 0.50133 to 0.52667, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.6932 - accuracy: 0.4558 - val_loss: 1.5068 - val_accuracy: 0.5267
Epoch 9/100
281/283 [============================>.] - ETA: 0s - loss: 1.5979 - accuracy: 0.4949
Epoch 9: val_accuracy improved from 0.52667 to 0.56700, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 23ms/step - loss: 1.5991 - accuracy: 0.4942 - val_loss: 1.3936 - val_accuracy: 0.5670
Epoch 10/100
282/283 [============================>.] - ETA: 0s - loss: 1.4931 - accuracy: 0.5232
Epoch 10: val_accuracy improved from 0.56700 to 0.59600, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.4925 - accuracy: 0.5233 - val_loss: 1.3325 - val_accuracy: 0.5960
Epoch 11/100
281/283 [============================>.] - ETA: 0s - loss: 1.4175 - accuracy: 0.5436
Epoch 11: val_accuracy improved from 0.59600 to 0.60600, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.4154 - accuracy: 0.5441 - val_loss: 1.2518 - val_accuracy: 0.6060
Epoch 12/100
281/283 [============================>.] - ETA: 0s - loss: 1.3300 - accuracy: 0.5718
Epoch 12: val_accuracy improved from 0.60600 to 0.60633, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.3300 - accuracy: 0.5717 - val_loss: 1.2692 - val_accuracy: 0.6063
Epoch 13/100
283/283 [==============================] - ETA: 0s - loss: 1.2865 - accuracy: 0.5861
Epoch 13: val_accuracy improved from 0.60633 to 0.64967, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.2865 - accuracy: 0.5861 - val_loss: 1.1405 - val_accuracy: 0.6497
Epoch 14/100
282/283 [============================>.] - ETA: 0s - loss: 1.2259 - accuracy: 0.6045
Epoch 14: val_accuracy improved from 0.64967 to 0.65033, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.2262 - accuracy: 0.6045 - val_loss: 1.1329 - val_accuracy: 0.6503
Epoch 15/100
283/283 [==============================] - ETA: 0s - loss: 1.1641 - accuracy: 0.6293
Epoch 15: val_accuracy improved from 0.65033 to 0.66200, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.1641 - accuracy: 0.6293 - val_loss: 1.0732 - val_accuracy: 0.6620
Epoch 16/100
283/283 [==============================] - ETA: 0s - loss: 1.1017 - accuracy: 0.6469
Epoch 16: val_accuracy improved from 0.66200 to 0.70600, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.1017 - accuracy: 0.6469 - val_loss: 0.9706 - val_accuracy: 0.7060
Epoch 17/100
282/283 [============================>.] - ETA: 0s - loss: 1.0817 - accuracy: 0.6476
Epoch 17: val_accuracy did not improve from 0.70600
283/283 [==============================] - 6s 22ms/step - loss: 1.0812 - accuracy: 0.6478 - val_loss: 0.9915 - val_accuracy: 0.6883
Epoch 18/100
282/283 [============================>.] - ETA: 0s - loss: 1.0310 - accuracy: 0.6709
Epoch 18: val_accuracy did not improve from 0.70600
283/283 [==============================] - 6s 22ms/step - loss: 1.0315 - accuracy: 0.6704 - val_loss: 1.0054 - val_accuracy: 0.6880
Epoch 19/100
281/283 [============================>.] - ETA: 0s - loss: 0.9882 - accuracy: 0.6758
Epoch 19: val_accuracy did not improve from 0.70600
283/283 [==============================] - 6s 22ms/step - loss: 0.9894 - accuracy: 0.6752 - val_loss: 0.9743 - val_accuracy: 0.7057
Epoch 20/100
282/283 [============================>.] - ETA: 0s - loss: 0.9641 - accuracy: 0.6881
Epoch 20: val_accuracy improved from 0.70600 to 0.74333, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 23ms/step - loss: 0.9640 - accuracy: 0.6880 - val_loss: 0.8723 - val_accuracy: 0.7433
Epoch 21/100
283/283 [==============================] - ETA: 0s - loss: 0.9417 - accuracy: 0.6922
Epoch 21: val_accuracy did not improve from 0.74333
283/283 [==============================] - 6s 22ms/step - loss: 0.9417 - accuracy: 0.6922 - val_loss: 0.8972 - val_accuracy: 0.7307
Epoch 22/100
281/283 [============================>.] - ETA: 0s - loss: 0.9078 - accuracy: 0.7121
Epoch 22: val_accuracy did not improve from 0.74333
283/283 [==============================] - 6s 22ms/step - loss: 0.9079 - accuracy: 0.7113 - val_loss: 0.9729 - val_accuracy: 0.7040
Epoch 23/100
281/283 [============================>.] - ETA: 0s - loss: 0.8624 - accuracy: 0.7175
Epoch 23: val_accuracy did not improve from 0.74333
283/283 [==============================] - 6s 23ms/step - loss: 0.8636 - accuracy: 0.7168 - val_loss: 0.9225 - val_accuracy: 0.7247
Epoch 24/100
283/283 [==============================] - ETA: 0s - loss: 0.8505 - accuracy: 0.7213
Epoch 24: val_accuracy improved from 0.74333 to 0.74433, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.8505 - accuracy: 0.7213 - val_loss: 0.8636 - val_accuracy: 0.7443
Epoch 25/100
281/283 [============================>.] - ETA: 0s - loss: 0.8154 - accuracy: 0.7362
Epoch 25: val_accuracy improved from 0.74433 to 0.75133, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.8156 - accuracy: 0.7359 - val_loss: 0.8644 - val_accuracy: 0.7513
Epoch 26/100
283/283 [==============================] - ETA: 0s - loss: 0.8130 - accuracy: 0.7372
Epoch 26: val_accuracy improved from 0.75133 to 0.76800, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.8130 - accuracy: 0.7372 - val_loss: 0.7644 - val_accuracy: 0.7680
Epoch 27/100
282/283 [============================>.] - ETA: 0s - loss: 0.7792 - accuracy: 0.7429
Epoch 27: val_accuracy did not improve from 0.76800
283/283 [==============================] - 6s 22ms/step - loss: 0.7795 - accuracy: 0.7426 - val_loss: 0.8351 - val_accuracy: 0.7417
Epoch 28/100
283/283 [==============================] - ETA: 0s - loss: 0.7576 - accuracy: 0.7529
Epoch 28: val_accuracy did not improve from 0.76800
283/283 [==============================] - 6s 22ms/step - loss: 0.7576 - accuracy: 0.7529 - val_loss: 0.8636 - val_accuracy: 0.7473
Epoch 29/100
281/283 [============================>.] - ETA: 0s - loss: 0.7309 - accuracy: 0.7574
Epoch 29: val_accuracy improved from 0.76800 to 0.77600, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 23ms/step - loss: 0.7309 - accuracy: 0.7575 - val_loss: 0.7692 - val_accuracy: 0.7760
Epoch 30/100
283/283 [==============================] - ETA: 0s - loss: 0.7029 - accuracy: 0.7713
Epoch 30: val_accuracy did not improve from 0.77600
283/283 [==============================] - 6s 23ms/step - loss: 0.7029 - accuracy: 0.7713 - val_loss: 0.8028 - val_accuracy: 0.7670
Epoch 31/100
283/283 [==============================] - ETA: 0s - loss: 0.7341 - accuracy: 0.7623
Epoch 31: val_accuracy improved from 0.77600 to 0.79433, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 0.7341 - accuracy: 0.7623 - val_loss: 0.7046 - val_accuracy: 0.7943
Epoch 32/100
281/283 [============================>.] - ETA: 0s - loss: 0.7139 - accuracy: 0.7705
Epoch 32: val_accuracy did not improve from 0.79433
283/283 [==============================] - 6s 23ms/step - loss: 0.7132 - accuracy: 0.7703 - val_loss: 0.7789 - val_accuracy: 0.7773
Epoch 33/100
281/283 [============================>.] - ETA: 0s - loss: 0.6761 - accuracy: 0.7796
Epoch 33: val_accuracy improved from 0.79433 to 0.79767, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.6760 - accuracy: 0.7798 - val_loss: 0.6829 - val_accuracy: 0.7977
Epoch 34/100
282/283 [============================>.] - ETA: 0s - loss: 0.6321 - accuracy: 0.7959
Epoch 34: val_accuracy improved from 0.79767 to 0.80133, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.6323 - accuracy: 0.7960 - val_loss: 0.6800 - val_accuracy: 0.8013
Epoch 35/100
283/283 [==============================] - ETA: 0s - loss: 0.6809 - accuracy: 0.7784
Epoch 35: val_accuracy improved from 0.80133 to 0.80400, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 24ms/step - loss: 0.6809 - accuracy: 0.7784 - val_loss: 0.6935 - val_accuracy: 0.8040
Epoch 36/100
283/283 [==============================] - ETA: 0s - loss: 0.6358 - accuracy: 0.7943
Epoch 36: val_accuracy did not improve from 0.80400
283/283 [==============================] - 6s 22ms/step - loss: 0.6358 - accuracy: 0.7943 - val_loss: 0.7393 - val_accuracy: 0.7943
Epoch 37/100
280/283 [============================>.] - ETA: 0s - loss: 0.6140 - accuracy: 0.8019
Epoch 37: val_accuracy did not improve from 0.80400
283/283 [==============================] - 6s 22ms/step - loss: 0.6142 - accuracy: 0.8017 - val_loss: 0.7471 - val_accuracy: 0.7887
Epoch 38/100
283/283 [==============================] - ETA: 0s - loss: 0.6188 - accuracy: 0.7997
Epoch 38: val_accuracy did not improve from 0.80400
283/283 [==============================] - 6s 22ms/step - loss: 0.6188 - accuracy: 0.7997 - val_loss: 0.7980 - val_accuracy: 0.7837
Epoch 39/100
282/283 [============================>.] - ETA: 0s - loss: 0.6177 - accuracy: 0.7979
Epoch 39: val_accuracy did not improve from 0.80400
283/283 [==============================] - 7s 24ms/step - loss: 0.6183 - accuracy: 0.7975 - val_loss: 0.7546 - val_accuracy: 0.7917
Epoch 40/100
282/283 [============================>.] - ETA: 0s - loss: 0.5858 - accuracy: 0.8115
Epoch 40: val_accuracy did not improve from 0.80400
283/283 [==============================] - 7s 23ms/step - loss: 0.5853 - accuracy: 0.8114 - val_loss: 0.7253 - val_accuracy: 0.8003
Epoch 41/100
280/283 [============================>.] - ETA: 0s - loss: 0.6003 - accuracy: 0.8033
Epoch 41: val_accuracy improved from 0.80400 to 0.82167, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 7s 23ms/step - loss: 0.6002 - accuracy: 0.8035 - val_loss: 0.6443 - val_accuracy: 0.8217
Epoch 42/100
282/283 [============================>.] - ETA: 0s - loss: 0.5897 - accuracy: 0.8065
Epoch 42: val_accuracy did not improve from 0.82167
283/283 [==============================] - 6s 23ms/step - loss: 0.5902 - accuracy: 0.8060 - val_loss: 0.6889 - val_accuracy: 0.8053
Epoch 43/100
283/283 [==============================] - ETA: 0s - loss: 0.5855 - accuracy: 0.8114
Epoch 43: val_accuracy improved from 0.82167 to 0.83100, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 0.5855 - accuracy: 0.8114 - val_loss: 0.6346 - val_accuracy: 0.8310
Epoch 44/100
283/283 [==============================] - ETA: 0s - loss: 0.5540 - accuracy: 0.8227
Epoch 44: val_accuracy did not improve from 0.83100
283/283 [==============================] - 6s 22ms/step - loss: 0.5540 - accuracy: 0.8227 - val_loss: 0.7291 - val_accuracy: 0.7957
Epoch 45/100
283/283 [==============================] - ETA: 0s - loss: 0.5706 - accuracy: 0.8142
Epoch 45: val_accuracy did not improve from 0.83100
283/283 [==============================] - 6s 22ms/step - loss: 0.5706 - accuracy: 0.8142 - val_loss: 0.7632 - val_accuracy: 0.7897
Epoch 46/100
281/283 [============================>.] - ETA: 0s - loss: 0.5847 - accuracy: 0.8124
Epoch 46: val_accuracy improved from 0.83100 to 0.83467, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 0.5835 - accuracy: 0.8128 - val_loss: 0.6000 - val_accuracy: 0.8347
Epoch 47/100
283/283 [==============================] - ETA: 0s - loss: 0.5397 - accuracy: 0.8250
Epoch 47: val_accuracy did not improve from 0.83467
283/283 [==============================] - 6s 23ms/step - loss: 0.5397 - accuracy: 0.8250 - val_loss: 0.6436 - val_accuracy: 0.8313
Epoch 48/100
283/283 [==============================] - ETA: 0s - loss: 0.5232 - accuracy: 0.8301
Epoch 48: val_accuracy did not improve from 0.83467
283/283 [==============================] - 6s 23ms/step - loss: 0.5232 - accuracy: 0.8301 - val_loss: 0.6214 - val_accuracy: 0.8283
Epoch 49/100
281/283 [============================>.] - ETA: 0s - loss: 0.5290 - accuracy: 0.8298
Epoch 49: val_accuracy did not improve from 0.83467
283/283 [==============================] - 7s 23ms/step - loss: 0.5288 - accuracy: 0.8296 - val_loss: 0.6337 - val_accuracy: 0.8263
Epoch 50/100
281/283 [============================>.] - ETA: 0s - loss: 0.5269 - accuracy: 0.8334
Epoch 50: val_accuracy did not improve from 0.83467
283/283 [==============================] - 6s 22ms/step - loss: 0.5275 - accuracy: 0.8330 - val_loss: 0.6399 - val_accuracy: 0.8340
Epoch 51/100
281/283 [============================>.] - ETA: 0s - loss: 0.4973 - accuracy: 0.8400
Epoch 51: val_accuracy did not improve from 0.83467
283/283 [==============================] - 6s 22ms/step - loss: 0.4978 - accuracy: 0.8401 - val_loss: 0.6771 - val_accuracy: 0.8177
Epoch 52/100
283/283 [==============================] - ETA: 0s - loss: 0.5290 - accuracy: 0.8333
Epoch 52: val_accuracy improved from 0.83467 to 0.83600, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5290 - accuracy: 0.8333 - val_loss: 0.6092 - val_accuracy: 0.8360
Epoch 53/100
281/283 [============================>.] - ETA: 0s - loss: 0.5046 - accuracy: 0.8382
Epoch 53: val_accuracy did not improve from 0.83600
283/283 [==============================] - 6s 22ms/step - loss: 0.5043 - accuracy: 0.8383 - val_loss: 0.6377 - val_accuracy: 0.8300
Epoch 54/100
282/283 [============================>.] - ETA: 0s - loss: 0.4947 - accuracy: 0.8425
Epoch 54: val_accuracy did not improve from 0.83600
283/283 [==============================] - 6s 21ms/step - loss: 0.4953 - accuracy: 0.8424 - val_loss: 0.6552 - val_accuracy: 0.8243
Epoch 55/100
282/283 [============================>.] - ETA: 0s - loss: 0.4841 - accuracy: 0.8470
Epoch 55: val_accuracy did not improve from 0.83600
283/283 [==============================] - 6s 22ms/step - loss: 0.4831 - accuracy: 0.8474 - val_loss: 0.7037 - val_accuracy: 0.8143
Epoch 56/100
282/283 [============================>.] - ETA: 0s - loss: 0.4893 - accuracy: 0.8492
Epoch 56: val_accuracy did not improve from 0.83600
283/283 [==============================] - 6s 22ms/step - loss: 0.4901 - accuracy: 0.8489 - val_loss: 0.7363 - val_accuracy: 0.8133
Epoch 57/100
282/283 [============================>.] - ETA: 0s - loss: 0.4709 - accuracy: 0.8532
Epoch 57: val_accuracy improved from 0.83600 to 0.84133, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.4716 - accuracy: 0.8531 - val_loss: 0.6100 - val_accuracy: 0.8413
Epoch 58/100
282/283 [============================>.] - ETA: 0s - loss: 0.4684 - accuracy: 0.8510
Epoch 58: val_accuracy improved from 0.84133 to 0.84967, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.4686 - accuracy: 0.8510 - val_loss: 0.5989 - val_accuracy: 0.8497
Epoch 59/100
282/283 [============================>.] - ETA: 0s - loss: 0.4992 - accuracy: 0.8448
Epoch 59: val_accuracy did not improve from 0.84967
283/283 [==============================] - 6s 22ms/step - loss: 0.4987 - accuracy: 0.8447 - val_loss: 0.6438 - val_accuracy: 0.8277
Epoch 60/100
283/283 [==============================] - ETA: 0s - loss: 0.4681 - accuracy: 0.8536
Epoch 60: val_accuracy did not improve from 0.84967
283/283 [==============================] - 6s 22ms/step - loss: 0.4681 - accuracy: 0.8536 - val_loss: 0.6624 - val_accuracy: 0.8203
Epoch 61/100
283/283 [==============================] - ETA: 0s - loss: 0.4440 - accuracy: 0.8576
Epoch 61: val_accuracy did not improve from 0.84967
283/283 [==============================] - 6s 21ms/step - loss: 0.4440 - accuracy: 0.8576 - val_loss: 0.6202 - val_accuracy: 0.8450
Epoch 62/100
283/283 [==============================] - ETA: 0s - loss: 0.4655 - accuracy: 0.8587
Epoch 62: val_accuracy did not improve from 0.84967
283/283 [==============================] - 6s 22ms/step - loss: 0.4655 - accuracy: 0.8587 - val_loss: 0.6460 - val_accuracy: 0.8367
Epoch 63/100
280/283 [============================>.] - ETA: 0s - loss: 0.4433 - accuracy: 0.8596
Epoch 63: val_accuracy improved from 0.84967 to 0.85867, saving model to ./best_model\100_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.4426 - accuracy: 0.8599 - val_loss: 0.5717 - val_accuracy: 0.8587
Epoch 64/100
282/283 [============================>.] - ETA: 0s - loss: 0.4616 - accuracy: 0.8542
Epoch 64: val_accuracy did not improve from 0.85867
283/283 [==============================] - 7s 23ms/step - loss: 0.4627 - accuracy: 0.8541 - val_loss: 0.6156 - val_accuracy: 0.8370
Epoch 65/100
282/283 [============================>.] - ETA: 0s - loss: 0.4687 - accuracy: 0.8517
Epoch 65: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 21ms/step - loss: 0.4692 - accuracy: 0.8514 - val_loss: 0.6308 - val_accuracy: 0.8427
Epoch 66/100
282/283 [============================>.] - ETA: 0s - loss: 0.4475 - accuracy: 0.8648
Epoch 66: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 22ms/step - loss: 0.4478 - accuracy: 0.8648 - val_loss: 0.6598 - val_accuracy: 0.8343
Epoch 67/100
282/283 [============================>.] - ETA: 0s - loss: 0.4305 - accuracy: 0.8648
Epoch 67: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 22ms/step - loss: 0.4296 - accuracy: 0.8651 - val_loss: 0.7738 - val_accuracy: 0.8113
Epoch 68/100
282/283 [============================>.] - ETA: 0s - loss: 0.4376 - accuracy: 0.8665
Epoch 68: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 21ms/step - loss: 0.4371 - accuracy: 0.8666 - val_loss: 0.7226 - val_accuracy: 0.8187
Epoch 69/100
282/283 [============================>.] - ETA: 0s - loss: 0.4489 - accuracy: 0.8647
Epoch 69: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 21ms/step - loss: 0.4487 - accuracy: 0.8644 - val_loss: 0.7047 - val_accuracy: 0.8230
Epoch 70/100
281/283 [============================>.] - ETA: 0s - loss: 0.4200 - accuracy: 0.8684
Epoch 70: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 22ms/step - loss: 0.4215 - accuracy: 0.8679 - val_loss: 0.6169 - val_accuracy: 0.8453
Epoch 71/100
282/283 [============================>.] - ETA: 0s - loss: 0.4030 - accuracy: 0.8746
Epoch 71: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 22ms/step - loss: 0.4045 - accuracy: 0.8744 - val_loss: 0.7007 - val_accuracy: 0.8167
Epoch 72/100
282/283 [============================>.] - ETA: 0s - loss: 0.4133 - accuracy: 0.8695
Epoch 72: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 22ms/step - loss: 0.4136 - accuracy: 0.8693 - val_loss: 0.6556 - val_accuracy: 0.8283
Epoch 73/100
283/283 [==============================] - ETA: 0s - loss: 0.4437 - accuracy: 0.8611Restoring model weights from the end of the best epoch: 63.

Epoch 73: val_accuracy did not improve from 0.85867
283/283 [==============================] - 6s 21ms/step - loss: 0.4437 - accuracy: 0.8611 - val_loss: 0.6602 - val_accuracy: 0.8393
Epoch 73: early stopping
CNN Error: 14.07%
[Figure: training vs validation accuracy per epoch (37x37 model)]
Model: "sequential_15"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_37 (Conv2D)          (None, 33, 33, 64)        1664      
                                                                 
 max_pooling2d_37 (MaxPoolin  (None, 16, 16, 64)       0         
 g2D)                                                            
                                                                 
 conv2d_38 (Conv2D)          (None, 14, 14, 128)       73856     
                                                                 
 max_pooling2d_38 (MaxPoolin  (None, 7, 7, 128)        0         
 g2D)                                                            
                                                                 
 conv2d_39 (Conv2D)          (None, 5, 5, 128)         147584    
                                                                 
 max_pooling2d_39 (MaxPoolin  (None, 2, 2, 128)        0         
 g2D)                                                            
                                                                 
 flatten_15 (Flatten)        (None, 512)               0         
                                                                 
 dense_43 (Dense)            (None, 256)               131328    
                                                                 
 dropout_28 (Dropout)        (None, 256)               0         
                                                                 
 dense_44 (Dense)            (None, 128)               32896     
                                                                 
 dropout_29 (Dropout)        (None, 128)               0         
                                                                 
 dense_45 (Dense)            (None, 15)                1935      
                                                                 
=================================================================
Total params: 389,263
Trainable params: 389,263
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 1s 12ms/step
[Figure: confusion matrix on the test set, 37x37 model]
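As a quick sanity check on the summary above, the printed parameter counts can be reproduced by hand. This helper is illustrative only (not part of the notebook's pipeline); it uses the standard formulas for Conv2D and Dense layers:

```python
# Conv2D params = kh*kw*in_channels*filters + filters (bias);
# Dense  params = n_in*n_out + n_out (bias).
def conv_params(kh, kw, in_ch, filters):
    return kh * kw * in_ch * filters + filters

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

counts = [
    conv_params(5, 5, 1, 64),     # conv2d_37  -> 1,664
    conv_params(3, 3, 64, 128),   # conv2d_38  -> 73,856
    conv_params(3, 3, 128, 128),  # conv2d_39  -> 147,584
    dense_params(512, 256),       # dense_43   -> 131,328
    dense_params(256, 128),       # dense_44   -> 32,896
    dense_params(128, 15),        # dense_45   -> 1,935
]
print(sum(counts))  # 389263 — matches "Total params" in the summary
```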
In [36]:
# Fix the NumPy random seed for reproducibility
# (TensorFlow ops would additionally need tf.random.set_seed(seed) to be fully deterministic)
seed = 88
np.random.seed(seed)

# create model
model = Sequential()

model.add(Conv2D(64, (5, 5), input_shape=(131, 131, 1), activation='relu',kernel_regularizer=l2(0.0001)))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(256, activation='relu',kernel_regularizer=l2(0.0001)))
model.add(Dropout(0.25))

model.add(Dense(128, activation='relu'))
model.add(Dropout(0.25))

model.add(Dense(num_classes, activation='softmax'))

# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Define early stopping and best-model checkpointing
# (the training log below reports checkpoint saves, so the ModelCheckpoint callback is defined here)
from tensorflow.keras.callbacks import ModelCheckpoint

early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint('./best_model/100_131x131.h5', monitor='val_accuracy',
                             save_best_only=True, verbose=1)

# Fit the model with early stopping and model checkpoint
# (batch_size is omitted: train131 is a generator that already yields batches,
# so a batch_size argument to fit() would be ignored)
history = model.fit(
    train131, 
    validation_data=val131, 
    epochs=100, 
    verbose=1, 
    class_weight=class_weights131, 
    callbacks=[early_stopping, checkpoint]
)


# Final evaluation of the model
scores = model.evaluate(test131, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))

plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (131x131)')
plt.plot(history.history['val_accuracy'], label='Validation (131x131)')

plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

model.summary()

# Predict the output on the test set
predictions = model.predict(test131, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test131.classes

# Get the label to class mapping from the generator
class_labels = list(test131.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/100
283/283 [==============================] - ETA: 0s - loss: 2.6323 - accuracy: 0.1173
Epoch 1: val_accuracy improved from -inf to 0.19900, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 21s 71ms/step - loss: 2.6323 - accuracy: 0.1173 - val_loss: 2.3674 - val_accuracy: 0.1990
Epoch 2/100
283/283 [==============================] - ETA: 0s - loss: 2.3528 - accuracy: 0.2153
Epoch 2: val_accuracy improved from 0.19900 to 0.31067, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 70ms/step - loss: 2.3528 - accuracy: 0.2153 - val_loss: 2.1335 - val_accuracy: 0.3107
Epoch 3/100
283/283 [==============================] - ETA: 0s - loss: 1.9582 - accuracy: 0.3790
Epoch 3: val_accuracy improved from 0.31067 to 0.46333, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 71ms/step - loss: 1.9582 - accuracy: 0.3790 - val_loss: 1.7167 - val_accuracy: 0.4633
Epoch 4/100
283/283 [==============================] - ETA: 0s - loss: 1.6206 - accuracy: 0.4934
Epoch 4: val_accuracy improved from 0.46333 to 0.62167, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 71ms/step - loss: 1.6206 - accuracy: 0.4934 - val_loss: 1.2563 - val_accuracy: 0.6217
Epoch 5/100
283/283 [==============================] - ETA: 0s - loss: 1.3681 - accuracy: 0.5813
Epoch 5: val_accuracy improved from 0.62167 to 0.71900, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 71ms/step - loss: 1.3681 - accuracy: 0.5813 - val_loss: 0.9635 - val_accuracy: 0.7190
Epoch 6/100
283/283 [==============================] - ETA: 0s - loss: 1.2125 - accuracy: 0.6320
Epoch 6: val_accuracy improved from 0.71900 to 0.72667, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 1.2125 - accuracy: 0.6320 - val_loss: 0.9936 - val_accuracy: 0.7267
Epoch 7/100
283/283 [==============================] - ETA: 0s - loss: 1.0609 - accuracy: 0.6903
Epoch 7: val_accuracy improved from 0.72667 to 0.77833, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 1.0609 - accuracy: 0.6903 - val_loss: 0.8277 - val_accuracy: 0.7783
Epoch 8/100
283/283 [==============================] - ETA: 0s - loss: 0.9809 - accuracy: 0.7172
Epoch 8: val_accuracy did not improve from 0.77833
283/283 [==============================] - 20s 71ms/step - loss: 0.9809 - accuracy: 0.7172 - val_loss: 0.8754 - val_accuracy: 0.7683
Epoch 9/100
283/283 [==============================] - ETA: 0s - loss: 0.9319 - accuracy: 0.7406
Epoch 9: val_accuracy improved from 0.77833 to 0.85000, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 0.9319 - accuracy: 0.7406 - val_loss: 0.6785 - val_accuracy: 0.8500
Epoch 10/100
283/283 [==============================] - ETA: 0s - loss: 0.8486 - accuracy: 0.7697
Epoch 10: val_accuracy improved from 0.85000 to 0.85433, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 0.8486 - accuracy: 0.7697 - val_loss: 0.6671 - val_accuracy: 0.8543
Epoch 11/100
283/283 [==============================] - ETA: 0s - loss: 0.8208 - accuracy: 0.7828
Epoch 11: val_accuracy did not improve from 0.85433
283/283 [==============================] - 20s 71ms/step - loss: 0.8208 - accuracy: 0.7828 - val_loss: 0.7163 - val_accuracy: 0.8470
Epoch 12/100
283/283 [==============================] - ETA: 0s - loss: 0.7333 - accuracy: 0.8109
Epoch 12: val_accuracy improved from 0.85433 to 0.86167, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 0.7333 - accuracy: 0.8109 - val_loss: 0.6529 - val_accuracy: 0.8617
Epoch 13/100
283/283 [==============================] - ETA: 0s - loss: 0.7337 - accuracy: 0.8144
Epoch 13: val_accuracy improved from 0.86167 to 0.88067, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 0.7337 - accuracy: 0.8144 - val_loss: 0.6002 - val_accuracy: 0.8807
Epoch 14/100
283/283 [==============================] - ETA: 0s - loss: 0.6735 - accuracy: 0.8353
Epoch 14: val_accuracy did not improve from 0.88067
283/283 [==============================] - 20s 70ms/step - loss: 0.6735 - accuracy: 0.8353 - val_loss: 0.8334 - val_accuracy: 0.8133
Epoch 15/100
283/283 [==============================] - ETA: 0s - loss: 0.6480 - accuracy: 0.8471
Epoch 15: val_accuracy did not improve from 0.88067
283/283 [==============================] - 20s 70ms/step - loss: 0.6480 - accuracy: 0.8471 - val_loss: 0.6242 - val_accuracy: 0.8670
Epoch 16/100
283/283 [==============================] - ETA: 0s - loss: 0.6495 - accuracy: 0.8456
Epoch 16: val_accuracy improved from 0.88067 to 0.88767, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 59s 210ms/step - loss: 0.6495 - accuracy: 0.8456 - val_loss: 0.5680 - val_accuracy: 0.8877
Epoch 17/100
283/283 [==============================] - ETA: 0s - loss: 0.6931 - accuracy: 0.8389
Epoch 17: val_accuracy improved from 0.88767 to 0.89833, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 69s 244ms/step - loss: 0.6931 - accuracy: 0.8389 - val_loss: 0.5546 - val_accuracy: 0.8983
Epoch 18/100
283/283 [==============================] - ETA: 0s - loss: 0.5744 - accuracy: 0.8766
Epoch 18: val_accuracy did not improve from 0.89833
283/283 [==============================] - 69s 243ms/step - loss: 0.5744 - accuracy: 0.8766 - val_loss: 0.5577 - val_accuracy: 0.8940
Epoch 19/100
283/283 [==============================] - ETA: 0s - loss: 0.5982 - accuracy: 0.8710
Epoch 19: val_accuracy improved from 0.89833 to 0.90233, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 69s 243ms/step - loss: 0.5982 - accuracy: 0.8710 - val_loss: 0.5580 - val_accuracy: 0.9023
Epoch 20/100
283/283 [==============================] - ETA: 0s - loss: 0.5697 - accuracy: 0.8799
Epoch 20: val_accuracy did not improve from 0.90233
283/283 [==============================] - 68s 241ms/step - loss: 0.5697 - accuracy: 0.8799 - val_loss: 0.5451 - val_accuracy: 0.9017
Epoch 21/100
283/283 [==============================] - ETA: 0s - loss: 0.5687 - accuracy: 0.8789
Epoch 21: val_accuracy improved from 0.90233 to 0.90567, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 68s 240ms/step - loss: 0.5687 - accuracy: 0.8789 - val_loss: 0.5232 - val_accuracy: 0.9057
Epoch 22/100
283/283 [==============================] - ETA: 0s - loss: 0.5553 - accuracy: 0.8850
Epoch 22: val_accuracy improved from 0.90567 to 0.91100, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 68s 240ms/step - loss: 0.5553 - accuracy: 0.8850 - val_loss: 0.5033 - val_accuracy: 0.9110
Epoch 23/100
283/283 [==============================] - ETA: 0s - loss: 0.5635 - accuracy: 0.8840
Epoch 23: val_accuracy did not improve from 0.91100
283/283 [==============================] - 43s 150ms/step - loss: 0.5635 - accuracy: 0.8840 - val_loss: 0.5657 - val_accuracy: 0.8947
Epoch 24/100
283/283 [==============================] - ETA: 0s - loss: 0.5495 - accuracy: 0.8907
Epoch 24: val_accuracy did not improve from 0.91100
283/283 [==============================] - 19s 69ms/step - loss: 0.5495 - accuracy: 0.8907 - val_loss: 0.5483 - val_accuracy: 0.9060
Epoch 25/100
283/283 [==============================] - ETA: 0s - loss: 0.5404 - accuracy: 0.8951
Epoch 25: val_accuracy did not improve from 0.91100
283/283 [==============================] - 20s 69ms/step - loss: 0.5404 - accuracy: 0.8951 - val_loss: 0.5574 - val_accuracy: 0.9043
Epoch 26/100
283/283 [==============================] - ETA: 0s - loss: 0.5296 - accuracy: 0.9009
Epoch 26: val_accuracy did not improve from 0.91100
283/283 [==============================] - 20s 69ms/step - loss: 0.5296 - accuracy: 0.9009 - val_loss: 0.5525 - val_accuracy: 0.9033
Epoch 27/100
283/283 [==============================] - ETA: 0s - loss: 0.5040 - accuracy: 0.9055
Epoch 27: val_accuracy did not improve from 0.91100
283/283 [==============================] - 56s 199ms/step - loss: 0.5040 - accuracy: 0.9055 - val_loss: 0.5409 - val_accuracy: 0.9093
Epoch 28/100
283/283 [==============================] - ETA: 0s - loss: 0.5067 - accuracy: 0.9019
Epoch 28: val_accuracy did not improve from 0.91100
283/283 [==============================] - 20s 69ms/step - loss: 0.5067 - accuracy: 0.9019 - val_loss: 0.5725 - val_accuracy: 0.8990
Epoch 29/100
283/283 [==============================] - ETA: 0s - loss: 0.5185 - accuracy: 0.9012
Epoch 29: val_accuracy improved from 0.91100 to 0.91267, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 70ms/step - loss: 0.5185 - accuracy: 0.9012 - val_loss: 0.5501 - val_accuracy: 0.9127
Epoch 30/100
283/283 [==============================] - ETA: 0s - loss: 0.4918 - accuracy: 0.9125
Epoch 30: val_accuracy improved from 0.91267 to 0.91500, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 26s 92ms/step - loss: 0.4918 - accuracy: 0.9125 - val_loss: 0.5354 - val_accuracy: 0.9150
Epoch 31/100
283/283 [==============================] - ETA: 0s - loss: 0.4977 - accuracy: 0.9130
Epoch 31: val_accuracy improved from 0.91500 to 0.91733, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 47s 164ms/step - loss: 0.4977 - accuracy: 0.9130 - val_loss: 0.5477 - val_accuracy: 0.9173
Epoch 32/100
283/283 [==============================] - ETA: 0s - loss: 0.4748 - accuracy: 0.9136
Epoch 32: val_accuracy did not improve from 0.91733
283/283 [==============================] - 20s 69ms/step - loss: 0.4748 - accuracy: 0.9136 - val_loss: 0.5325 - val_accuracy: 0.9137
Epoch 33/100
283/283 [==============================] - ETA: 0s - loss: 0.4708 - accuracy: 0.9210
Epoch 33: val_accuracy did not improve from 0.91733
283/283 [==============================] - 20s 69ms/step - loss: 0.4708 - accuracy: 0.9210 - val_loss: 0.5414 - val_accuracy: 0.9140
Epoch 34/100
283/283 [==============================] - ETA: 0s - loss: 0.5099 - accuracy: 0.9085
Epoch 34: val_accuracy did not improve from 0.91733
283/283 [==============================] - 20s 69ms/step - loss: 0.5099 - accuracy: 0.9085 - val_loss: 0.5329 - val_accuracy: 0.9160
Epoch 35/100
283/283 [==============================] - ETA: 0s - loss: 0.4684 - accuracy: 0.9207
Epoch 35: val_accuracy did not improve from 0.91733
283/283 [==============================] - 20s 69ms/step - loss: 0.4684 - accuracy: 0.9207 - val_loss: 0.6280 - val_accuracy: 0.9000
Epoch 36/100
283/283 [==============================] - ETA: 0s - loss: 0.5150 - accuracy: 0.9114
Epoch 36: val_accuracy improved from 0.91733 to 0.92067, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 70ms/step - loss: 0.5150 - accuracy: 0.9114 - val_loss: 0.5235 - val_accuracy: 0.9207
Epoch 37/100
283/283 [==============================] - ETA: 0s - loss: 0.4674 - accuracy: 0.9242
Epoch 37: val_accuracy improved from 0.92067 to 0.92900, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 70ms/step - loss: 0.4674 - accuracy: 0.9242 - val_loss: 0.4985 - val_accuracy: 0.9290
Epoch 38/100
283/283 [==============================] - ETA: 0s - loss: 0.5001 - accuracy: 0.9150
Epoch 38: val_accuracy did not improve from 0.92900
283/283 [==============================] - 20s 69ms/step - loss: 0.5001 - accuracy: 0.9150 - val_loss: 0.5185 - val_accuracy: 0.9213
Epoch 39/100
283/283 [==============================] - ETA: 0s - loss: 0.4564 - accuracy: 0.9300
Epoch 39: val_accuracy did not improve from 0.92900
283/283 [==============================] - 20s 69ms/step - loss: 0.4564 - accuracy: 0.9300 - val_loss: 0.5086 - val_accuracy: 0.9267
Epoch 40/100
283/283 [==============================] - ETA: 0s - loss: 0.4570 - accuracy: 0.9276
Epoch 40: val_accuracy did not improve from 0.92900
283/283 [==============================] - 20s 69ms/step - loss: 0.4570 - accuracy: 0.9276 - val_loss: 0.5135 - val_accuracy: 0.9283
Epoch 41/100
283/283 [==============================] - ETA: 0s - loss: 0.4496 - accuracy: 0.9292
Epoch 41: val_accuracy did not improve from 0.92900
283/283 [==============================] - 20s 69ms/step - loss: 0.4496 - accuracy: 0.9292 - val_loss: 0.5659 - val_accuracy: 0.9090
Epoch 42/100
283/283 [==============================] - ETA: 0s - loss: 0.4359 - accuracy: 0.9340
Epoch 42: val_accuracy did not improve from 0.92900
283/283 [==============================] - 20s 69ms/step - loss: 0.4359 - accuracy: 0.9340 - val_loss: 0.6172 - val_accuracy: 0.9117
Epoch 43/100
283/283 [==============================] - ETA: 0s - loss: 0.4403 - accuracy: 0.9339
Epoch 43: val_accuracy did not improve from 0.92900
283/283 [==============================] - 20s 70ms/step - loss: 0.4403 - accuracy: 0.9339 - val_loss: 0.5036 - val_accuracy: 0.9287
Epoch 44/100
283/283 [==============================] - ETA: 0s - loss: 0.4657 - accuracy: 0.9268
Epoch 44: val_accuracy did not improve from 0.92900
283/283 [==============================] - 20s 69ms/step - loss: 0.4657 - accuracy: 0.9268 - val_loss: 0.5594 - val_accuracy: 0.9163
Epoch 45/100
283/283 [==============================] - ETA: 0s - loss: 0.4493 - accuracy: 0.9297
Epoch 45: val_accuracy did not improve from 0.92900
283/283 [==============================] - 20s 70ms/step - loss: 0.4493 - accuracy: 0.9297 - val_loss: 0.4992 - val_accuracy: 0.9283
Epoch 46/100
283/283 [==============================] - ETA: 0s - loss: 0.4345 - accuracy: 0.9330
Epoch 46: val_accuracy improved from 0.92900 to 0.93433, saving model to ./best_model\100_131x131.h5
283/283 [==============================] - 20s 70ms/step - loss: 0.4345 - accuracy: 0.9330 - val_loss: 0.4730 - val_accuracy: 0.9343
Epoch 47/100
283/283 [==============================] - ETA: 0s - loss: 0.4574 - accuracy: 0.9319
Epoch 47: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 69ms/step - loss: 0.4574 - accuracy: 0.9319 - val_loss: 0.4877 - val_accuracy: 0.9310
Epoch 48/100
283/283 [==============================] - ETA: 0s - loss: 0.4154 - accuracy: 0.9414
Epoch 48: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4154 - accuracy: 0.9414 - val_loss: 0.4936 - val_accuracy: 0.9270
Epoch 49/100
283/283 [==============================] - ETA: 0s - loss: 0.4295 - accuracy: 0.9377
Epoch 49: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4295 - accuracy: 0.9377 - val_loss: 0.5448 - val_accuracy: 0.9193
Epoch 50/100
283/283 [==============================] - ETA: 0s - loss: 0.4332 - accuracy: 0.9386
Epoch 50: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4332 - accuracy: 0.9386 - val_loss: 0.5278 - val_accuracy: 0.9220
Epoch 51/100
283/283 [==============================] - ETA: 0s - loss: 0.4376 - accuracy: 0.9377
Epoch 51: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4376 - accuracy: 0.9377 - val_loss: 0.5224 - val_accuracy: 0.9240
Epoch 52/100
283/283 [==============================] - ETA: 0s - loss: 0.4195 - accuracy: 0.9426
Epoch 52: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4195 - accuracy: 0.9426 - val_loss: 0.5308 - val_accuracy: 0.9233
Epoch 53/100
283/283 [==============================] - ETA: 0s - loss: 0.4277 - accuracy: 0.9406
Epoch 53: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4277 - accuracy: 0.9406 - val_loss: 0.5010 - val_accuracy: 0.9277
Epoch 54/100
283/283 [==============================] - ETA: 0s - loss: 0.4237 - accuracy: 0.9370
Epoch 54: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4237 - accuracy: 0.9370 - val_loss: 0.5005 - val_accuracy: 0.9250
Epoch 55/100
283/283 [==============================] - ETA: 0s - loss: 0.4493 - accuracy: 0.9346
Epoch 55: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4493 - accuracy: 0.9346 - val_loss: 0.5568 - val_accuracy: 0.9173
Epoch 56/100
283/283 [==============================] - ETA: 0s - loss: 0.4310 - accuracy: 0.9391Restoring model weights from the end of the best epoch: 46.

Epoch 56: val_accuracy did not improve from 0.93433
283/283 [==============================] - 20s 70ms/step - loss: 0.4310 - accuracy: 0.9391 - val_loss: 0.5087 - val_accuracy: 0.9323
Epoch 56: early stopping
CNN Error: 6.37%
[Figure: training vs. validation accuracy per epoch, 131x131 model]
Model: "sequential_16"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_40 (Conv2D)          (None, 127, 127, 64)      1664      
                                                                 
 max_pooling2d_40 (MaxPoolin  (None, 63, 63, 64)       0         
 g2D)                                                            
                                                                 
 conv2d_41 (Conv2D)          (None, 61, 61, 128)       73856     
                                                                 
 max_pooling2d_41 (MaxPoolin  (None, 30, 30, 128)      0         
 g2D)                                                            
                                                                 
 conv2d_42 (Conv2D)          (None, 28, 28, 128)       147584    
                                                                 
 max_pooling2d_42 (MaxPoolin  (None, 14, 14, 128)      0         
 g2D)                                                            
                                                                 
 flatten_16 (Flatten)        (None, 25088)             0         
                                                                 
 dense_46 (Dense)            (None, 256)               6422784   
                                                                 
 dropout_30 (Dropout)        (None, 256)               0         
                                                                 
 dense_47 (Dense)            (None, 128)               32896     
                                                                 
 dropout_31 (Dropout)        (None, 128)               0         
                                                                 
 dense_48 (Dense)            (None, 15)                1935      
                                                                 
=================================================================
Total params: 6,680,719
Trainable params: 6,680,719
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 2s 18ms/step
[Figure: confusion matrix on the test set, 131x131 model]
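The jump from 389,263 params (37x37) to 6,680,719 params (131x131) comes almost entirely from the first Dense layer after Flatten. A small sketch (not part of the notebook) traces the feature-map side length through the three valid-padding Conv2D(5/3/3) + MaxPooling2D(2,2) stages:

```python
# A 'valid' convolution shrinks each side by (kernel - 1);
# a 2x2 max-pool then floor-divides the side by 2.
def trace(size, kernels=(5, 3, 3)):
    for k in kernels:
        size = (size - (k - 1)) // 2
    return size

print(trace(37), trace(131))                       # 2 and 14
print(trace(37)**2 * 128, trace(131)**2 * 128)     # Flatten: 512 vs 25088 features
# First Dense layer: 512*256+256  = 131,328 params at 37x37,
# but 25088*256+256 = 6,422,784 params at 131x131 — most of the 6.68M total.
```

This is why larger inputs make the fully connected head, not the convolutional layers, dominate the parameter budget; the conv layers' counts are identical in both summaries.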

Increasing the epoch limit from 100 to 200 (37x37 input)

In [37]:
# Fix the NumPy random seed for reproducibility
# (TensorFlow ops would additionally need tf.random.set_seed(seed) to be fully deterministic)
seed = 88
np.random.seed(seed)

# Create the model
model = Sequential()

model.add(Conv2D(64, (5, 5), input_shape=(37, 37, 1), activation='relu', kernel_regularizer=l2(0.0001)))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(256, activation='relu', kernel_regularizer=l2(0.0001)))
model.add(Dropout(0.25))

model.add(Dense(128, activation='relu'))
model.add(Dropout(0.25))

model.add(Dense(num_classes, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])


# Define early stopping and best-model checkpointing
# (the training log below reports checkpoint saves, so the ModelCheckpoint callback is defined here)
from tensorflow.keras.callbacks import ModelCheckpoint

early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint('./best_model/200_37x37.h5', monitor='val_accuracy',
                             save_best_only=True, verbose=1)

# Fit the model with early stopping and model checkpoint
# (batch_size is omitted: train37 is a generator that already yields batches,
# so a batch_size argument to fit() would be ignored)
history = model.fit(
    train37, 
    validation_data=val37, 
    epochs=200, 
    verbose=1, 
    class_weight=class_weights37, 
    callbacks=[early_stopping, checkpoint]
)


# Evaluate the best model on the test set
scores = model.evaluate(test37, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))

# Plot accuracy
plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (37x37)')
plt.plot(history.history['val_accuracy'], label='Validation (37x37)')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

# Print model summary
model.summary()

# Predict the output on the test set using the best model
predictions = model.predict(test37, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels
true_classes = test37.classes

# Get the label to class mapping from the generator
class_labels = list(test37.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/200
282/283 [============================>.] - ETA: 0s - loss: 2.7215 - accuracy: 0.0630
Epoch 1: val_accuracy improved from -inf to 0.13533, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 11s 37ms/step - loss: 2.7215 - accuracy: 0.0629 - val_loss: 2.6469 - val_accuracy: 0.1353
Epoch 2/200
282/283 [============================>.] - ETA: 0s - loss: 2.5652 - accuracy: 0.1491
Epoch 2: val_accuracy improved from 0.13533 to 0.22600, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 2.5649 - accuracy: 0.1492 - val_loss: 2.3680 - val_accuracy: 0.2260
Epoch 3/200
283/283 [==============================] - ETA: 0s - loss: 2.3739 - accuracy: 0.2315
Epoch 3: val_accuracy improved from 0.22600 to 0.26700, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 23ms/step - loss: 2.3739 - accuracy: 0.2315 - val_loss: 2.2274 - val_accuracy: 0.2670
Epoch 4/200
281/283 [============================>.] - ETA: 0s - loss: 2.2415 - accuracy: 0.2703
Epoch 4: val_accuracy improved from 0.26700 to 0.34733, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 2.2388 - accuracy: 0.2706 - val_loss: 2.0422 - val_accuracy: 0.3473
Epoch 5/200
281/283 [============================>.] - ETA: 0s - loss: 2.1117 - accuracy: 0.3157
Epoch 5: val_accuracy improved from 0.34733 to 0.37367, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 2.1113 - accuracy: 0.3163 - val_loss: 1.9174 - val_accuracy: 0.3737
Epoch 6/200
281/283 [============================>.] - ETA: 0s - loss: 1.9610 - accuracy: 0.3618
Epoch 6: val_accuracy improved from 0.37367 to 0.42733, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.9610 - accuracy: 0.3621 - val_loss: 1.8213 - val_accuracy: 0.4273
Epoch 7/200
283/283 [==============================] - ETA: 0s - loss: 1.8652 - accuracy: 0.3980
Epoch 7: val_accuracy improved from 0.42733 to 0.47900, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.8652 - accuracy: 0.3980 - val_loss: 1.6815 - val_accuracy: 0.4790
Epoch 8/200
281/283 [============================>.] - ETA: 0s - loss: 1.7458 - accuracy: 0.4422
Epoch 8: val_accuracy improved from 0.47900 to 0.51867, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.7466 - accuracy: 0.4421 - val_loss: 1.5446 - val_accuracy: 0.5187
Epoch 9/200
283/283 [==============================] - ETA: 0s - loss: 1.6729 - accuracy: 0.4537
Epoch 9: val_accuracy improved from 0.51867 to 0.53267, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.6729 - accuracy: 0.4537 - val_loss: 1.5107 - val_accuracy: 0.5327
Epoch 10/200
281/283 [============================>.] - ETA: 0s - loss: 1.5641 - accuracy: 0.4938
Epoch 10: val_accuracy improved from 0.53267 to 0.56500, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.5624 - accuracy: 0.4937 - val_loss: 1.4349 - val_accuracy: 0.5650
Epoch 11/200
281/283 [============================>.] - ETA: 0s - loss: 1.4927 - accuracy: 0.5174
Epoch 11: val_accuracy improved from 0.56500 to 0.58800, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.4931 - accuracy: 0.5175 - val_loss: 1.3362 - val_accuracy: 0.5880
Epoch 12/200
283/283 [==============================] - ETA: 0s - loss: 1.4373 - accuracy: 0.5331
Epoch 12: val_accuracy did not improve from 0.58800
283/283 [==============================] - 6s 22ms/step - loss: 1.4373 - accuracy: 0.5331 - val_loss: 1.3664 - val_accuracy: 0.5797
Epoch 13/200
281/283 [============================>.] - ETA: 0s - loss: 1.3565 - accuracy: 0.5620
Epoch 13: val_accuracy improved from 0.58800 to 0.61667, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.3550 - accuracy: 0.5625 - val_loss: 1.2701 - val_accuracy: 0.6167
Epoch 14/200
281/283 [============================>.] - ETA: 0s - loss: 1.3254 - accuracy: 0.5740
Epoch 14: val_accuracy improved from 0.61667 to 0.63500, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.3242 - accuracy: 0.5744 - val_loss: 1.2037 - val_accuracy: 0.6350
Epoch 15/200
281/283 [============================>.] - ETA: 0s - loss: 1.2780 - accuracy: 0.5894
Epoch 15: val_accuracy improved from 0.63500 to 0.65233, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.2780 - accuracy: 0.5892 - val_loss: 1.1533 - val_accuracy: 0.6523
Epoch 16/200
282/283 [============================>.] - ETA: 0s - loss: 1.2229 - accuracy: 0.6026
Epoch 16: val_accuracy improved from 0.65233 to 0.65433, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.2229 - accuracy: 0.6023 - val_loss: 1.1724 - val_accuracy: 0.6543
Epoch 17/200
282/283 [============================>.] - ETA: 0s - loss: 1.1763 - accuracy: 0.6174
Epoch 17: val_accuracy improved from 0.65433 to 0.66333, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.1758 - accuracy: 0.6177 - val_loss: 1.1176 - val_accuracy: 0.6633
Epoch 18/200
282/283 [============================>.] - ETA: 0s - loss: 1.1629 - accuracy: 0.6269
Epoch 18: val_accuracy did not improve from 0.66333
283/283 [==============================] - 6s 21ms/step - loss: 1.1643 - accuracy: 0.6262 - val_loss: 1.1058 - val_accuracy: 0.6557
Epoch 19/200
281/283 [============================>.] - ETA: 0s - loss: 1.1472 - accuracy: 0.6236
Epoch 19: val_accuracy did not improve from 0.66333
283/283 [==============================] - 6s 22ms/step - loss: 1.1490 - accuracy: 0.6233 - val_loss: 1.3395 - val_accuracy: 0.5960
Epoch 20/200
281/283 [============================>.] - ETA: 0s - loss: 1.1213 - accuracy: 0.6277
Epoch 20: val_accuracy improved from 0.66333 to 0.69133, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.1223 - accuracy: 0.6279 - val_loss: 1.0394 - val_accuracy: 0.6913
Epoch 21/200
281/283 [============================>.] - ETA: 0s - loss: 1.0372 - accuracy: 0.6601
Epoch 21: val_accuracy improved from 0.69133 to 0.69267, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.0366 - accuracy: 0.6602 - val_loss: 0.9985 - val_accuracy: 0.6927
Epoch 22/200
282/283 [============================>.] - ETA: 0s - loss: 1.0140 - accuracy: 0.6742
Epoch 22: val_accuracy improved from 0.69267 to 0.71367, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 1.0139 - accuracy: 0.6743 - val_loss: 0.9358 - val_accuracy: 0.7137
Epoch 23/200
281/283 [============================>.] - ETA: 0s - loss: 1.0005 - accuracy: 0.6681
Epoch 23: val_accuracy did not improve from 0.71367
283/283 [==============================] - 6s 23ms/step - loss: 1.0007 - accuracy: 0.6677 - val_loss: 0.9626 - val_accuracy: 0.7133
Epoch 24/200
282/283 [============================>.] - ETA: 0s - loss: 0.9743 - accuracy: 0.6763
Epoch 24: val_accuracy did not improve from 0.71367
283/283 [==============================] - 6s 22ms/step - loss: 0.9760 - accuracy: 0.6762 - val_loss: 1.0271 - val_accuracy: 0.6993
Epoch 25/200
283/283 [==============================] - ETA: 0s - loss: 0.9564 - accuracy: 0.6817
Epoch 25: val_accuracy improved from 0.71367 to 0.72200, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.9564 - accuracy: 0.6817 - val_loss: 0.9419 - val_accuracy: 0.7220
Epoch 26/200
283/283 [==============================] - ETA: 0s - loss: 0.9301 - accuracy: 0.6942
Epoch 26: val_accuracy improved from 0.72200 to 0.73433, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.9301 - accuracy: 0.6942 - val_loss: 0.9213 - val_accuracy: 0.7343
Epoch 27/200
281/283 [============================>.] - ETA: 0s - loss: 0.9029 - accuracy: 0.6999
Epoch 27: val_accuracy did not improve from 0.73433
283/283 [==============================] - 6s 22ms/step - loss: 0.9019 - accuracy: 0.7000 - val_loss: 0.9100 - val_accuracy: 0.7287
Epoch 28/200
281/283 [============================>.] - ETA: 0s - loss: 0.8770 - accuracy: 0.7162
Epoch 28: val_accuracy did not improve from 0.73433
283/283 [==============================] - 6s 22ms/step - loss: 0.8777 - accuracy: 0.7159 - val_loss: 0.9570 - val_accuracy: 0.7243
Epoch 29/200
283/283 [==============================] - ETA: 0s - loss: 0.8791 - accuracy: 0.7169
Epoch 29: val_accuracy improved from 0.73433 to 0.75467, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.8791 - accuracy: 0.7169 - val_loss: 0.8573 - val_accuracy: 0.7547
Epoch 30/200
283/283 [==============================] - ETA: 0s - loss: 0.8600 - accuracy: 0.7165
Epoch 30: val_accuracy did not improve from 0.75467
283/283 [==============================] - 6s 22ms/step - loss: 0.8600 - accuracy: 0.7165 - val_loss: 1.1894 - val_accuracy: 0.6427
Epoch 31/200
281/283 [============================>.] - ETA: 0s - loss: 0.8410 - accuracy: 0.7241
Epoch 31: val_accuracy did not improve from 0.75467
283/283 [==============================] - 6s 22ms/step - loss: 0.8388 - accuracy: 0.7249 - val_loss: 0.9332 - val_accuracy: 0.7330
Epoch 32/200
283/283 [==============================] - ETA: 0s - loss: 0.8379 - accuracy: 0.7275
Epoch 32: val_accuracy improved from 0.75467 to 0.76967, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.8379 - accuracy: 0.7275 - val_loss: 0.8217 - val_accuracy: 0.7697
Epoch 33/200
281/283 [============================>.] - ETA: 0s - loss: 0.8057 - accuracy: 0.7363
Epoch 33: val_accuracy did not improve from 0.76967
283/283 [==============================] - 6s 22ms/step - loss: 0.8049 - accuracy: 0.7367 - val_loss: 0.8112 - val_accuracy: 0.7600
Epoch 34/200
283/283 [==============================] - ETA: 0s - loss: 0.8166 - accuracy: 0.7345
Epoch 34: val_accuracy did not improve from 0.76967
283/283 [==============================] - 6s 22ms/step - loss: 0.8166 - accuracy: 0.7345 - val_loss: 0.9314 - val_accuracy: 0.7297
Epoch 35/200
283/283 [==============================] - ETA: 0s - loss: 0.8058 - accuracy: 0.7362
Epoch 35: val_accuracy improved from 0.76967 to 0.77467, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.8058 - accuracy: 0.7362 - val_loss: 0.7948 - val_accuracy: 0.7747
Epoch 36/200
283/283 [==============================] - ETA: 0s - loss: 0.7556 - accuracy: 0.7540
Epoch 36: val_accuracy improved from 0.77467 to 0.78000, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.7556 - accuracy: 0.7540 - val_loss: 0.7790 - val_accuracy: 0.7800
Epoch 37/200
281/283 [============================>.] - ETA: 0s - loss: 0.7454 - accuracy: 0.7500
Epoch 37: val_accuracy did not improve from 0.78000
283/283 [==============================] - 6s 21ms/step - loss: 0.7460 - accuracy: 0.7503 - val_loss: 0.7886 - val_accuracy: 0.7763
Epoch 38/200
283/283 [==============================] - ETA: 0s - loss: 0.7432 - accuracy: 0.7609
Epoch 38: val_accuracy did not improve from 0.78000
283/283 [==============================] - 6s 22ms/step - loss: 0.7432 - accuracy: 0.7609 - val_loss: 0.8224 - val_accuracy: 0.7713
Epoch 39/200
281/283 [============================>.] - ETA: 0s - loss: 0.7177 - accuracy: 0.7614
Epoch 39: val_accuracy did not improve from 0.78000
283/283 [==============================] - 6s 22ms/step - loss: 0.7193 - accuracy: 0.7612 - val_loss: 0.7936 - val_accuracy: 0.7780
Epoch 40/200
283/283 [==============================] - ETA: 0s - loss: 0.7139 - accuracy: 0.7664
Epoch 40: val_accuracy improved from 0.78000 to 0.79133, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.7139 - accuracy: 0.7664 - val_loss: 0.7509 - val_accuracy: 0.7913
Epoch 41/200
281/283 [============================>.] - ETA: 0s - loss: 0.6955 - accuracy: 0.7730
Epoch 41: val_accuracy did not improve from 0.79133
283/283 [==============================] - 6s 22ms/step - loss: 0.6965 - accuracy: 0.7730 - val_loss: 0.7952 - val_accuracy: 0.7703
Epoch 42/200
282/283 [============================>.] - ETA: 0s - loss: 0.7143 - accuracy: 0.7648
Epoch 42: val_accuracy did not improve from 0.79133
283/283 [==============================] - 6s 22ms/step - loss: 0.7146 - accuracy: 0.7646 - val_loss: 0.7940 - val_accuracy: 0.7760
Epoch 43/200
283/283 [==============================] - ETA: 0s - loss: 0.6774 - accuracy: 0.7772
Epoch 43: val_accuracy did not improve from 0.79133
283/283 [==============================] - 6s 21ms/step - loss: 0.6774 - accuracy: 0.7772 - val_loss: 0.9778 - val_accuracy: 0.7290
Epoch 44/200
281/283 [============================>.] - ETA: 0s - loss: 0.6817 - accuracy: 0.7755
Epoch 44: val_accuracy improved from 0.79133 to 0.79733, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.6810 - accuracy: 0.7758 - val_loss: 0.7377 - val_accuracy: 0.7973
Epoch 45/200
281/283 [============================>.] - ETA: 0s - loss: 0.6664 - accuracy: 0.7826
Epoch 45: val_accuracy improved from 0.79733 to 0.80200, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 21ms/step - loss: 0.6669 - accuracy: 0.7820 - val_loss: 0.7074 - val_accuracy: 0.8020
Epoch 46/200
283/283 [==============================] - ETA: 0s - loss: 0.6616 - accuracy: 0.7872
Epoch 46: val_accuracy did not improve from 0.80200
283/283 [==============================] - 6s 21ms/step - loss: 0.6616 - accuracy: 0.7872 - val_loss: 0.8913 - val_accuracy: 0.7533
Epoch 47/200
283/283 [==============================] - ETA: 0s - loss: 0.6834 - accuracy: 0.7717
Epoch 47: val_accuracy did not improve from 0.80200
283/283 [==============================] - 6s 21ms/step - loss: 0.6834 - accuracy: 0.7717 - val_loss: 0.7483 - val_accuracy: 0.7873
Epoch 48/200
282/283 [============================>.] - ETA: 0s - loss: 0.6650 - accuracy: 0.7853
Epoch 48: val_accuracy did not improve from 0.80200
283/283 [==============================] - 6s 21ms/step - loss: 0.6651 - accuracy: 0.7854 - val_loss: 0.7496 - val_accuracy: 0.7907
Epoch 49/200
281/283 [============================>.] - ETA: 0s - loss: 0.6433 - accuracy: 0.7914
Epoch 49: val_accuracy did not improve from 0.80200
283/283 [==============================] - 6s 22ms/step - loss: 0.6430 - accuracy: 0.7914 - val_loss: 0.7465 - val_accuracy: 0.7913
Epoch 50/200
282/283 [============================>.] - ETA: 0s - loss: 0.6212 - accuracy: 0.7976
Epoch 50: val_accuracy improved from 0.80200 to 0.80367, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.6212 - accuracy: 0.7975 - val_loss: 0.7243 - val_accuracy: 0.8037
Epoch 51/200
281/283 [============================>.] - ETA: 0s - loss: 0.6347 - accuracy: 0.7951
Epoch 51: val_accuracy improved from 0.80367 to 0.81233, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.6334 - accuracy: 0.7953 - val_loss: 0.7018 - val_accuracy: 0.8123
Epoch 52/200
283/283 [==============================] - ETA: 0s - loss: 0.6302 - accuracy: 0.7952
Epoch 52: val_accuracy did not improve from 0.81233
283/283 [==============================] - 6s 22ms/step - loss: 0.6302 - accuracy: 0.7952 - val_loss: 0.7819 - val_accuracy: 0.7790
Epoch 53/200
282/283 [============================>.] - ETA: 0s - loss: 0.6287 - accuracy: 0.7968
Epoch 53: val_accuracy did not improve from 0.81233
283/283 [==============================] - 6s 21ms/step - loss: 0.6283 - accuracy: 0.7971 - val_loss: 0.7566 - val_accuracy: 0.7967
Epoch 54/200
282/283 [============================>.] - ETA: 0s - loss: 0.5975 - accuracy: 0.8048
Epoch 54: val_accuracy did not improve from 0.81233
283/283 [==============================] - 6s 22ms/step - loss: 0.5969 - accuracy: 0.8047 - val_loss: 0.7591 - val_accuracy: 0.7950
Epoch 55/200
282/283 [============================>.] - ETA: 0s - loss: 0.6222 - accuracy: 0.7998
Epoch 55: val_accuracy did not improve from 0.81233
283/283 [==============================] - 6s 22ms/step - loss: 0.6227 - accuracy: 0.7995 - val_loss: 0.8122 - val_accuracy: 0.7723
Epoch 56/200
283/283 [==============================] - ETA: 0s - loss: 0.6108 - accuracy: 0.8015
Epoch 56: val_accuracy did not improve from 0.81233
283/283 [==============================] - 6s 22ms/step - loss: 0.6108 - accuracy: 0.8015 - val_loss: 0.7283 - val_accuracy: 0.8073
Epoch 57/200
282/283 [============================>.] - ETA: 0s - loss: 0.6202 - accuracy: 0.7965
Epoch 57: val_accuracy did not improve from 0.81233
283/283 [==============================] - 6s 22ms/step - loss: 0.6196 - accuracy: 0.7965 - val_loss: 0.7591 - val_accuracy: 0.7943
Epoch 58/200
283/283 [==============================] - ETA: 0s - loss: 0.5886 - accuracy: 0.8090
Epoch 58: val_accuracy improved from 0.81233 to 0.82033, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5886 - accuracy: 0.8090 - val_loss: 0.6755 - val_accuracy: 0.8203
Epoch 59/200
283/283 [==============================] - ETA: 0s - loss: 0.5793 - accuracy: 0.8107
Epoch 59: val_accuracy did not improve from 0.82033
283/283 [==============================] - 6s 22ms/step - loss: 0.5793 - accuracy: 0.8107 - val_loss: 0.6740 - val_accuracy: 0.8157
Epoch 60/200
281/283 [============================>.] - ETA: 0s - loss: 0.5632 - accuracy: 0.8191
Epoch 60: val_accuracy did not improve from 0.82033
283/283 [==============================] - 6s 22ms/step - loss: 0.5643 - accuracy: 0.8190 - val_loss: 0.7098 - val_accuracy: 0.8130
Epoch 61/200
281/283 [============================>.] - ETA: 0s - loss: 0.5778 - accuracy: 0.8089
Epoch 61: val_accuracy did not improve from 0.82033
283/283 [==============================] - 6s 22ms/step - loss: 0.5767 - accuracy: 0.8091 - val_loss: 0.6850 - val_accuracy: 0.8083
Epoch 62/200
281/283 [============================>.] - ETA: 0s - loss: 0.5756 - accuracy: 0.8129
Epoch 62: val_accuracy did not improve from 0.82033
283/283 [==============================] - 6s 22ms/step - loss: 0.5763 - accuracy: 0.8125 - val_loss: 0.7329 - val_accuracy: 0.8157
Epoch 63/200
282/283 [============================>.] - ETA: 0s - loss: 0.5698 - accuracy: 0.8179
Epoch 63: val_accuracy improved from 0.82033 to 0.82133, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5700 - accuracy: 0.8178 - val_loss: 0.6811 - val_accuracy: 0.8213
Epoch 64/200
282/283 [============================>.] - ETA: 0s - loss: 0.5939 - accuracy: 0.8112
Epoch 64: val_accuracy improved from 0.82133 to 0.82267, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5929 - accuracy: 0.8118 - val_loss: 0.6802 - val_accuracy: 0.8227
Epoch 65/200
283/283 [==============================] - ETA: 0s - loss: 0.5729 - accuracy: 0.8208
Epoch 65: val_accuracy improved from 0.82267 to 0.82500, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5729 - accuracy: 0.8208 - val_loss: 0.6686 - val_accuracy: 0.8250
Epoch 66/200
283/283 [==============================] - ETA: 0s - loss: 0.5654 - accuracy: 0.8210
Epoch 66: val_accuracy did not improve from 0.82500
283/283 [==============================] - 6s 22ms/step - loss: 0.5654 - accuracy: 0.8210 - val_loss: 0.6993 - val_accuracy: 0.8163
Epoch 67/200
282/283 [============================>.] - ETA: 0s - loss: 0.5366 - accuracy: 0.8236
Epoch 67: val_accuracy did not improve from 0.82500
283/283 [==============================] - 6s 22ms/step - loss: 0.5359 - accuracy: 0.8239 - val_loss: 0.7066 - val_accuracy: 0.8087
Epoch 68/200
282/283 [============================>.] - ETA: 0s - loss: 0.5417 - accuracy: 0.8257
Epoch 68: val_accuracy did not improve from 0.82500
283/283 [==============================] - 6s 22ms/step - loss: 0.5410 - accuracy: 0.8261 - val_loss: 0.6990 - val_accuracy: 0.8133
Epoch 69/200
283/283 [==============================] - ETA: 0s - loss: 0.5404 - accuracy: 0.8334
Epoch 69: val_accuracy did not improve from 0.82500
283/283 [==============================] - 6s 21ms/step - loss: 0.5404 - accuracy: 0.8334 - val_loss: 0.6974 - val_accuracy: 0.8190
Epoch 70/200
281/283 [============================>.] - ETA: 0s - loss: 0.5458 - accuracy: 0.8298
Epoch 70: val_accuracy did not improve from 0.82500
283/283 [==============================] - 6s 22ms/step - loss: 0.5478 - accuracy: 0.8288 - val_loss: 0.7475 - val_accuracy: 0.8020
Epoch 71/200
281/283 [============================>.] - ETA: 0s - loss: 0.5154 - accuracy: 0.8347
Epoch 71: val_accuracy did not improve from 0.82500
283/283 [==============================] - 6s 22ms/step - loss: 0.5141 - accuracy: 0.8346 - val_loss: 0.7229 - val_accuracy: 0.8147
Epoch 72/200
282/283 [============================>.] - ETA: 0s - loss: 0.5430 - accuracy: 0.8301
Epoch 72: val_accuracy improved from 0.82500 to 0.83033, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5439 - accuracy: 0.8298 - val_loss: 0.6848 - val_accuracy: 0.8303
Epoch 73/200
281/283 [============================>.] - ETA: 0s - loss: 0.5162 - accuracy: 0.8378
Epoch 73: val_accuracy did not improve from 0.83033
283/283 [==============================] - 6s 22ms/step - loss: 0.5176 - accuracy: 0.8372 - val_loss: 0.6637 - val_accuracy: 0.8273
Epoch 74/200
283/283 [==============================] - ETA: 0s - loss: 0.5217 - accuracy: 0.8341
Epoch 74: val_accuracy did not improve from 0.83033
283/283 [==============================] - 6s 21ms/step - loss: 0.5217 - accuracy: 0.8341 - val_loss: 0.7187 - val_accuracy: 0.8133
Epoch 75/200
281/283 [============================>.] - ETA: 0s - loss: 0.5248 - accuracy: 0.8351
Epoch 75: val_accuracy did not improve from 0.83033
283/283 [==============================] - 6s 22ms/step - loss: 0.5248 - accuracy: 0.8352 - val_loss: 0.6826 - val_accuracy: 0.8210
Epoch 76/200
282/283 [============================>.] - ETA: 0s - loss: 0.5366 - accuracy: 0.8320
Epoch 76: val_accuracy did not improve from 0.83033
283/283 [==============================] - 6s 22ms/step - loss: 0.5367 - accuracy: 0.8321 - val_loss: 0.7356 - val_accuracy: 0.8153
Epoch 77/200
281/283 [============================>.] - ETA: 0s - loss: 0.5464 - accuracy: 0.8246
Epoch 77: val_accuracy did not improve from 0.83033
283/283 [==============================] - 6s 22ms/step - loss: 0.5451 - accuracy: 0.8251 - val_loss: 0.8226 - val_accuracy: 0.7983
Epoch 78/200
283/283 [==============================] - ETA: 0s - loss: 0.5386 - accuracy: 0.8298
Epoch 78: val_accuracy improved from 0.83033 to 0.83900, saving model to ./best_model\200_37x37.h5
283/283 [==============================] - 6s 22ms/step - loss: 0.5386 - accuracy: 0.8298 - val_loss: 0.6183 - val_accuracy: 0.8390
Epoch 79/200
283/283 [==============================] - ETA: 0s - loss: 0.4995 - accuracy: 0.8403
Epoch 79: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 21ms/step - loss: 0.4995 - accuracy: 0.8403 - val_loss: 0.7100 - val_accuracy: 0.8193
Epoch 80/200
280/283 [============================>.] - ETA: 0s - loss: 0.5100 - accuracy: 0.8427
Epoch 80: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 22ms/step - loss: 0.5105 - accuracy: 0.8428 - val_loss: 0.6892 - val_accuracy: 0.8260
Epoch 81/200
283/283 [==============================] - ETA: 0s - loss: 0.5149 - accuracy: 0.8401
Epoch 81: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 21ms/step - loss: 0.5149 - accuracy: 0.8401 - val_loss: 0.7066 - val_accuracy: 0.8213
Epoch 82/200
283/283 [==============================] - ETA: 0s - loss: 0.5145 - accuracy: 0.8352
Epoch 82: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 21ms/step - loss: 0.5145 - accuracy: 0.8352 - val_loss: 0.7409 - val_accuracy: 0.8033
Epoch 83/200
282/283 [============================>.] - ETA: 0s - loss: 0.4807 - accuracy: 0.8496
Epoch 83: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 22ms/step - loss: 0.4804 - accuracy: 0.8497 - val_loss: 0.7213 - val_accuracy: 0.8237
Epoch 84/200
283/283 [==============================] - ETA: 0s - loss: 0.4744 - accuracy: 0.8529
Epoch 84: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 22ms/step - loss: 0.4744 - accuracy: 0.8529 - val_loss: 0.6908 - val_accuracy: 0.8280
Epoch 85/200
281/283 [============================>.] - ETA: 0s - loss: 0.5424 - accuracy: 0.8319
Epoch 85: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 21ms/step - loss: 0.5420 - accuracy: 0.8320 - val_loss: 0.6800 - val_accuracy: 0.8287
Epoch 86/200
281/283 [============================>.] - ETA: 0s - loss: 0.4909 - accuracy: 0.8487
Epoch 86: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 22ms/step - loss: 0.4907 - accuracy: 0.8486 - val_loss: 0.6768 - val_accuracy: 0.8363
Epoch 87/200
281/283 [============================>.] - ETA: 0s - loss: 0.4577 - accuracy: 0.8558
Epoch 87: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 22ms/step - loss: 0.4576 - accuracy: 0.8558 - val_loss: 0.7750 - val_accuracy: 0.8090
Epoch 88/200
281/283 [============================>.] - ETA: 0s - loss: 0.4914 - accuracy: 0.8510Restoring model weights from the end of the best epoch: 78.

Epoch 88: val_accuracy did not improve from 0.83900
283/283 [==============================] - 6s 22ms/step - loss: 0.4904 - accuracy: 0.8511 - val_loss: 0.7086 - val_accuracy: 0.8167
Epoch 88: early stopping
CNN Error: 16.17%
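The "CNN Error" figure printed above is simply 100 minus the test accuracy in percent (see the `print` in the training cell). A quick sketch of that arithmetic, using the implied test accuracy of 0.8383:

```python
def cnn_error(test_accuracy):
    """Error metric used in this notebook: 100 - accuracy in percent."""
    return 100 - test_accuracy * 100

print(f"CNN Error: {cnn_error(0.8383):.2f}%")  # → CNN Error: 16.17%
```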
[Figure: training vs. validation accuracy per epoch for the 37x37 model]
Model: "sequential_17"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_43 (Conv2D)          (None, 33, 33, 64)        1664      
                                                                 
 max_pooling2d_43 (MaxPoolin  (None, 16, 16, 64)       0         
 g2D)                                                            
                                                                 
 conv2d_44 (Conv2D)          (None, 14, 14, 128)       73856     
                                                                 
 max_pooling2d_44 (MaxPoolin  (None, 7, 7, 128)        0         
 g2D)                                                            
                                                                 
 conv2d_45 (Conv2D)          (None, 5, 5, 128)         147584    
                                                                 
 max_pooling2d_45 (MaxPoolin  (None, 2, 2, 128)        0         
 g2D)                                                            
                                                                 
 flatten_17 (Flatten)        (None, 512)               0         
                                                                 
 dense_49 (Dense)            (None, 256)               131328    
                                                                 
 dropout_32 (Dropout)        (None, 256)               0         
                                                                 
 dense_50 (Dense)            (None, 128)               32896     
                                                                 
 dropout_33 (Dropout)        (None, 128)               0         
                                                                 
 dense_51 (Dense)            (None, 15)                1935      
                                                                 
=================================================================
Total params: 389,263
Trainable params: 389,263
Non-trainable params: 0
_________________________________________________________________
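The parameter counts in the summary above can be double-checked by hand: a Conv2D layer has `k*k*in_channels*out_channels` weights plus one bias per filter, and a Dense layer has `in*out` weights plus `out` biases. A minimal pure-Python sketch, with shapes copied from the printed summary:

```python
def conv2d_params(k, in_ch, out_ch):
    # k x k kernel weights per (in, out) channel pair, plus one bias per filter
    return k * k * in_ch * out_ch + out_ch

def dense_params(in_units, out_units):
    # weight matrix plus one bias per output unit
    return in_units * out_units + out_units

total = (
    conv2d_params(5, 1, 64)       # conv2d_43: 1,664
    + conv2d_params(3, 64, 128)   # conv2d_44: 73,856
    + conv2d_params(3, 128, 128)  # conv2d_45: 147,584
    + dense_params(512, 256)      # dense_49: 131,328
    + dense_params(256, 128)      # dense_50: 32,896
    + dense_params(128, 15)       # dense_51: 1,935
)
print(total)  # → 389263, matching "Total params: 389,263"
```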
94/94 [==============================] - 1s 12ms/step
[Figure: confusion matrix for the 37x37 model on the test set]
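`plot_confusion_matrix` is a helper defined earlier in the notebook; beyond the plot, per-class recall can be read straight off the matrix as the diagonal entry divided by its row sum. A minimal pure-Python sketch (the 3x3 matrix here is illustrative only, not the actual 15-class result):

```python
def per_class_recall(cm):
    # cm[i][j] = count of true class i predicted as class j;
    # recall for class i is the diagonal entry over the row total
    return [row[i] / sum(row) if sum(row) else 0.0
            for i, row in enumerate(cm)]

cm = [[8, 1, 1],
      [2, 7, 1],
      [0, 2, 8]]
print(per_class_recall(cm))  # → [0.8, 0.7, 0.8]
```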
In [38]:
# Fix random seed for reproducibility (note: this seeds NumPy only; TensorFlow keeps its own RNG)
seed = 88
np.random.seed(seed)

# Create the model
model = Sequential()

model.add(Conv2D(64, (5, 5), input_shape=(131, 131, 1), activation='relu', kernel_regularizer=l2(0.0001)))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(256, activation='relu', kernel_regularizer=l2(0.0001)))
model.add(Dropout(0.25))

model.add(Dense(128, activation='relu'))
model.add(Dropout(0.25))

model.add(Dense(num_classes, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])


# Define callbacks: early stopping plus a checkpoint that saves the best
# weights to disk (the "saving model to ..." lines in the log come from this)
early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint('./best_model/200_131x131.h5', monitor='val_accuracy', save_best_only=True, verbose=1)


# Fit the model with early stopping and model checkpointing
# (the batch size is set by the generator, so no batch_size argument here)
history = model.fit(
    train131, 
    validation_data=val131, 
    epochs=200, 
    verbose=1, 
    class_weight=class_weights131, 
    callbacks=[early_stopping, checkpoint]
)


# Evaluate the best model on the test set
scores = model.evaluate(test131, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))

# Plot accuracy
plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (131x131)')
plt.plot(history.history['val_accuracy'], label='Validation (131x131)')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

# Print model summary
model.summary()

# Predict the output on the test set using the best model
predictions = model.predict(test131, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)

# Get the true labels (assumes test131 was created with shuffle=False,
# so the prediction order matches .classes)
true_classes = test131.classes

# Get the label to class mapping from the generator
class_labels = list(test131.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

plot_confusion_matrix(cm, class_labels)
Epoch 1/200
283/283 [==============================] - ETA: 0s - loss: 2.6393 - accuracy: 0.1283
Epoch 1: val_accuracy improved from -inf to 0.24267, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 21s 71ms/step - loss: 2.6393 - accuracy: 0.1283 - val_loss: 2.3195 - val_accuracy: 0.2427
Epoch 2/200
283/283 [==============================] - ETA: 0s - loss: 2.2386 - accuracy: 0.2735
Epoch 2: val_accuracy improved from 0.24267 to 0.38833, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 20s 71ms/step - loss: 2.2386 - accuracy: 0.2735 - val_loss: 1.8972 - val_accuracy: 0.3883
Epoch 3/200
283/283 [==============================] - ETA: 0s - loss: 1.9462 - accuracy: 0.3862
Epoch 3: val_accuracy improved from 0.38833 to 0.48600, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 20s 71ms/step - loss: 1.9462 - accuracy: 0.3862 - val_loss: 1.6529 - val_accuracy: 0.4860
Epoch 4/200
283/283 [==============================] - ETA: 0s - loss: 1.6498 - accuracy: 0.4958
Epoch 4: val_accuracy improved from 0.48600 to 0.58733, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 20s 71ms/step - loss: 1.6498 - accuracy: 0.4958 - val_loss: 1.3311 - val_accuracy: 0.5873
Epoch 5/200
283/283 [==============================] - ETA: 0s - loss: 1.4032 - accuracy: 0.5727
Epoch 5: val_accuracy improved from 0.58733 to 0.70500, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 1.4032 - accuracy: 0.5727 - val_loss: 1.0209 - val_accuracy: 0.7050
Epoch 6/200
283/283 [==============================] - ETA: 0s - loss: 1.2112 - accuracy: 0.6313
Epoch 6: val_accuracy improved from 0.70500 to 0.73600, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 1.2112 - accuracy: 0.6313 - val_loss: 0.9385 - val_accuracy: 0.7360
Epoch 7/200
283/283 [==============================] - ETA: 0s - loss: 1.1123 - accuracy: 0.6718
Epoch 7: val_accuracy improved from 0.73600 to 0.78933, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 1.1123 - accuracy: 0.6718 - val_loss: 0.7959 - val_accuracy: 0.7893
Epoch 8/200
283/283 [==============================] - ETA: 0s - loss: 0.9933 - accuracy: 0.7058
Epoch 8: val_accuracy did not improve from 0.78933
283/283 [==============================] - 20s 72ms/step - loss: 0.9933 - accuracy: 0.7058 - val_loss: 0.9115 - val_accuracy: 0.7480
Epoch 9/200
283/283 [==============================] - ETA: 0s - loss: 0.9115 - accuracy: 0.7385
Epoch 9: val_accuracy did not improve from 0.78933
283/283 [==============================] - 20s 72ms/step - loss: 0.9115 - accuracy: 0.7385 - val_loss: 0.8032 - val_accuracy: 0.7887
Epoch 10/200
283/283 [==============================] - ETA: 0s - loss: 0.8377 - accuracy: 0.7664
Epoch 10: val_accuracy did not improve from 0.78933
283/283 [==============================] - 20s 72ms/step - loss: 0.8377 - accuracy: 0.7664 - val_loss: 0.8974 - val_accuracy: 0.7663
Epoch 11/200
283/283 [==============================] - ETA: 0s - loss: 0.8108 - accuracy: 0.7829
Epoch 11: val_accuracy improved from 0.78933 to 0.84600, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 0.8108 - accuracy: 0.7829 - val_loss: 0.6650 - val_accuracy: 0.8460
Epoch 12/200
283/283 [==============================] - ETA: 0s - loss: 0.7534 - accuracy: 0.7991
Epoch 12: val_accuracy improved from 0.84600 to 0.86833, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 20s 72ms/step - loss: 0.7534 - accuracy: 0.7991 - val_loss: 0.6001 - val_accuracy: 0.8683
Epoch 13/200
283/283 [==============================] - ETA: 0s - loss: 0.7262 - accuracy: 0.8113
Epoch 13: val_accuracy did not improve from 0.86833
283/283 [==============================] - 20s 72ms/step - loss: 0.7262 - accuracy: 0.8113 - val_loss: 0.6052 - val_accuracy: 0.8667
Epoch 14/200
283/283 [==============================] - ETA: 0s - loss: 0.6841 - accuracy: 0.8209
Epoch 14: val_accuracy did not improve from 0.86833
283/283 [==============================] - 21s 73ms/step - loss: 0.6841 - accuracy: 0.8209 - val_loss: 0.6084 - val_accuracy: 0.8680
Epoch 15/200
283/283 [==============================] - ETA: 0s - loss: 0.6704 - accuracy: 0.8360
Epoch 15: val_accuracy improved from 0.86833 to 0.88000, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 21s 73ms/step - loss: 0.6704 - accuracy: 0.8360 - val_loss: 0.5692 - val_accuracy: 0.8800
Epoch 16/200
283/283 [==============================] - ETA: 0s - loss: 0.6742 - accuracy: 0.8356
Epoch 16: val_accuracy did not improve from 0.88000
283/283 [==============================] - 21s 73ms/step - loss: 0.6742 - accuracy: 0.8356 - val_loss: 0.5704 - val_accuracy: 0.8780
Epoch 17/200
283/283 [==============================] - ETA: 0s - loss: 0.6559 - accuracy: 0.8451
Epoch 17: val_accuracy did not improve from 0.88000
283/283 [==============================] - 21s 73ms/step - loss: 0.6559 - accuracy: 0.8451 - val_loss: 0.6472 - val_accuracy: 0.8593
Epoch 18/200
283/283 [==============================] - ETA: 0s - loss: 0.5963 - accuracy: 0.8588
Epoch 18: val_accuracy did not improve from 0.88000
283/283 [==============================] - 21s 73ms/step - loss: 0.5963 - accuracy: 0.8588 - val_loss: 0.6208 - val_accuracy: 0.8733
Epoch 19/200
283/283 [==============================] - ETA: 0s - loss: 0.5914 - accuracy: 0.8643
Epoch 19: val_accuracy improved from 0.88000 to 0.89033, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.5914 - accuracy: 0.8643 - val_loss: 0.5851 - val_accuracy: 0.8903
Epoch 20/200
283/283 [==============================] - ETA: 0s - loss: 0.5903 - accuracy: 0.8614
Epoch 20: val_accuracy did not improve from 0.89033
283/283 [==============================] - 21s 74ms/step - loss: 0.5903 - accuracy: 0.8614 - val_loss: 0.5880 - val_accuracy: 0.8840
Epoch 21/200
283/283 [==============================] - ETA: 0s - loss: 0.5737 - accuracy: 0.8679
Epoch 21: val_accuracy did not improve from 0.89033
283/283 [==============================] - 21s 73ms/step - loss: 0.5737 - accuracy: 0.8679 - val_loss: 0.6089 - val_accuracy: 0.8763
Epoch 22/200
283/283 [==============================] - ETA: 0s - loss: 0.5608 - accuracy: 0.8742
Epoch 22: val_accuracy improved from 0.89033 to 0.90133, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.5608 - accuracy: 0.8742 - val_loss: 0.5221 - val_accuracy: 0.9013
Epoch 23/200
283/283 [==============================] - ETA: 0s - loss: 0.5472 - accuracy: 0.8739
Epoch 23: val_accuracy improved from 0.90133 to 0.91133, saving model to ./best_model\200_131x131.h5
283/283 [==============================] - 21s 74ms/step - loss: 0.5472 - accuracy: 0.8739 - val_loss: 0.5181 - val_accuracy: 0.9113
Epoch 24/200
283/283 [==============================] - ETA: 0s - loss: 0.5411 - accuracy: 0.8811
Epoch 24: val_accuracy did not improve from 0.91133
283/283 [==============================] - 21s 74ms/step - loss: 0.5411 - accuracy: 0.8811 - val_loss: 0.5928 - val_accuracy: 0.8860
Epoch 25/200
283/283 [==============================] - ETA: 0s - loss: 0.5397 - accuracy: 0.8866
Epoch 25: val_accuracy did not improve from 0.91133
283/283 [==============================] - 20s 71ms/step - loss: 0.5397 - accuracy: 0.8866 - val_loss: 0.5310 - val_accuracy: 0.8987
Epoch 26/200
283/283 [==============================] - ETA: 0s - loss: 0.5084 - accuracy: 0.8922
Epoch 26: val_accuracy did not improve from 0.91133
283/283 [==============================] - 20s 72ms/step - loss: 0.5084 - accuracy: 0.8922 - val_loss: 0.5893 - val_accuracy: 0.8917
Epoch 27/200
283/283 [==============================] - ETA: 0s - loss: 0.5080 - accuracy: 0.8949
Epoch 27: val_accuracy did not improve from 0.91133
283/283 [==============================] - 21s 73ms/step - loss: 0.5080 - accuracy: 0.8949 - val_loss: 0.6096 - val_accuracy: 0.8900
Epoch 28/200
283/283 [==============================] - ETA: 0s - loss: 0.5331 - accuracy: 0.8878
Epoch 28: val_accuracy did not improve from 0.91133
283/283 [==============================] - 21s 73ms/step - loss: 0.5331 - accuracy: 0.8878 - val_loss: 0.5299 - val_accuracy: 0.9033
Epoch 29/200
283/283 [==============================] - ETA: 0s - loss: 0.5260 - accuracy: 0.8885
Epoch 29: val_accuracy did not improve from 0.91133
283/283 [==============================] - 21s 73ms/step - loss: 0.5260 - accuracy: 0.8885 - val_loss: 0.6079 - val_accuracy: 0.8937
Epoch 30/200
283/283 [==============================] - ETA: 0s - loss: 0.4959 - accuracy: 0.8977
Epoch 30: val_accuracy did not improve from 0.91133
283/283 [==============================] - 20s 72ms/step - loss: 0.4959 - accuracy: 0.8977 - val_loss: 0.5617 - val_accuracy: 0.8957
Epoch 31/200
283/283 [==============================] - ETA: 0s - loss: 0.4914 - accuracy: 0.9027
Epoch 31: val_accuracy did not improve from 0.91133
283/283 [==============================] - 48s 171ms/step - loss: 0.4914 - accuracy: 0.9027 - val_loss: 0.5740 - val_accuracy: 0.8983
Epoch 32/200
283/283 [==============================] - ETA: 0s - loss: 0.4880 - accuracy: 0.9037
Epoch 32: val_accuracy did not improve from 0.91133
283/283 [==============================] - 37s 130ms/step - loss: 0.4880 - accuracy: 0.9037 - val_loss: 0.6304 - val_accuracy: 0.8877
Epoch 33/200
283/283 [==============================] - ETA: 0s - loss: 0.4852 - accuracy: 0.9027Restoring model weights from the end of the best epoch: 23.

Epoch 33: val_accuracy did not improve from 0.91133
283/283 [==============================] - 20s 71ms/step - loss: 0.4852 - accuracy: 0.9027 - val_loss: 0.5379 - val_accuracy: 0.8993
Epoch 33: early stopping
CNN Error: 8.67%
[Plot: model accuracy]
Model: "sequential_18"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_46 (Conv2D)          (None, 127, 127, 64)      1664      
                                                                 
 max_pooling2d_46 (MaxPoolin  (None, 63, 63, 64)       0         
 g2D)                                                            
                                                                 
 conv2d_47 (Conv2D)          (None, 61, 61, 128)       73856     
                                                                 
 max_pooling2d_47 (MaxPoolin  (None, 30, 30, 128)      0         
 g2D)                                                            
                                                                 
 conv2d_48 (Conv2D)          (None, 28, 28, 128)       147584    
                                                                 
 max_pooling2d_48 (MaxPoolin  (None, 14, 14, 128)      0         
 g2D)                                                            
                                                                 
 flatten_18 (Flatten)        (None, 25088)             0         
                                                                 
 dense_52 (Dense)            (None, 256)               6422784   
                                                                 
 dropout_34 (Dropout)        (None, 256)               0         
                                                                 
 dense_53 (Dense)            (None, 128)               32896     
                                                                 
 dropout_35 (Dropout)        (None, 128)               0         
                                                                 
 dense_54 (Dense)            (None, 15)                1935      
                                                                 
=================================================================
Total params: 6,680,719
Trainable params: 6,680,719
Non-trainable params: 0
_________________________________________________________________
94/94 [==============================] - 2s 19ms/step
[Plot: confusion matrix]
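As a sanity check, the parameter counts in the summary above can be reproduced by hand: a Conv2D layer has (kernel_h × kernel_w × in_channels + 1) × filters parameters (the +1 is the bias), and a Dense layer has (inputs + 1) × units. A minimal sketch in plain Python, using the layer shapes from the summary:

```python
# Recompute the trainable parameter counts reported in the model summary above.
# Conv2D: (kernel_h * kernel_w * in_channels + 1) * filters   (+1 = bias per filter)
# Dense:  (inputs + 1) * units                                (+1 = bias per unit)

def conv2d_params(kh, kw, in_ch, filters):
    return (kh * kw * in_ch + 1) * filters

def dense_params(inputs, units):
    return (inputs + 1) * units

counts = [
    conv2d_params(5, 5, 1, 64),        # conv2d_46 -> 1,664
    conv2d_params(3, 3, 64, 128),      # conv2d_47 -> 73,856
    conv2d_params(3, 3, 128, 128),     # conv2d_48 -> 147,584
    dense_params(14 * 14 * 128, 256),  # dense_52  -> 6,422,784 (flatten: 14*14*128 = 25,088)
    dense_params(256, 128),            # dense_53  -> 32,896
    dense_params(128, 15),             # dense_54  -> 1,935
]
print(sum(counts))  # 6680719, matching "Total params: 6,680,719"
```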

Observations:

The validation accuracy dropped when training was extended from 100 epochs to 200 epochs. A likely cause:

Model Complexity:

If the model is too complex or has too many parameters relative to the size of the dataset, training for longer may worsen overfitting rather than improve generalization.

Hence, I will use the improved model with L2 regularisation trained for 100 epochs.
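For reference, L2 regularisation (applied via `kernel_regularizer=l2(0.0001)` in the models below) adds a penalty proportional to the sum of squared weights to the loss, which discourages large weights and so limits overfitting. A minimal numeric sketch with illustrative (made-up) weights and base loss:

```python
# L2 regularisation adds lam * sum(w^2) to the loss, pushing weights toward
# smaller values. The weights and base_loss below are illustrative only.
lam = 0.0001                      # same coefficient as l2(0.0001) used below

weights = [0.5, -1.2, 0.8, 2.0]   # hypothetical kernel weights
base_loss = 0.42                  # hypothetical data loss (e.g. cross-entropy)

l2_penalty = lam * sum(w * w for w in weights)
total_loss = base_loss + l2_penalty
print(l2_penalty)  # 0.000633 -- small, but grows quadratically with the weights
```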

Hyperparameter Tuning using RandomSearch¶


Keras Tuner's RandomSearch (the Keras counterpart of scikit-learn's RandomizedSearchCV) helps find the best combination of hyperparameters for a model.

It automates the process of tuning hyperparameters, which control how the model learns.

Here's how it helps:

  • Efficient Search:

Instead of manually trying different settings, random search samples combinations from a defined range. This saves time and compute compared to testing every possibility exhaustively.

  • Better Settings:

By trying out many combinations, it finds the ones that make the model perform best, which can improve the model's accuracy, robustness, and stability.

  • Avoiding Overfitting:

It helps find settings that generalize well to new data, rather than settings that fit the training data too closely.

  • Works with Cross-Validation:

Random search can be combined with methods like k-fold cross-validation to evaluate each setting across different splits of the training data.
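The idea can be sketched without Keras Tuner: randomly sample hyperparameter combinations from the defined ranges instead of enumerating all of them. A minimal sketch using the same search space as the `hp.Int(..., 64, 256, step=64)` and `hp.Choice` definitions in the cell below:

```python
import random

random.seed(88)  # reproducibility, matching the notebook's seed

# Same search space as the hp.Int / hp.Choice definitions used with the tuner
search_space = {
    'filters_1': [64, 128, 192, 256],
    'filters_2': [64, 128, 192, 256],
    'filters_3': [64, 128, 192, 256],
    'dense_units1': [128, 192, 256],
    'dense_units2': [128, 192, 256],
    'optimizer': ['adam', 'rmsprop'],
}

def sample_trial(space):
    """Pick one random value per hyperparameter (one 'trial')."""
    return {name: random.choice(values) for name, values in space.items()}

# 10 random trials instead of 4*4*4*3*3*2 = 1152 exhaustive combinations
trials = [sample_trial(search_space) for _ in range(10)]
for t in trials[:2]:
    print(t)
```

Each trial would then be trained and scored on the validation set; the tuner keeps the best one, which is what `max_trials=10` does below.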

In [12]:
# Set the seed for reproducibility
seed = 88
np.random.seed(seed)
tf.random.set_seed(seed)


# Define a function to create the Keras model with hyperparameters
def build_model(hp):
    model = Sequential([
    Conv2D(hp.Int('filters_1', 64, 256, step=64), (5, 5), input_shape=(37, 37, 1), activation='relu', kernel_regularizer=l2(0.0001)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(hp.Int('filters_2', 64, 256, step=64), (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(hp.Int('filters_3', 64, 256, step=64), (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(hp.Int('dense_units1', 128, 256, step=64), activation='relu', kernel_regularizer=l2(0.0001)),
    Dropout(0.25),
    Dense(hp.Int('dense_units2', 128, 256, step=64), activation='relu'),
    Dropout(0.25),
    Dense(15, activation='softmax')
    ])
    
    model.compile(optimizer=hp.Choice('optimizer', ['adam', 'rmsprop']),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Initialize the Keras Tuner
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=10,
    executions_per_trial=1,
    directory='keras_tuner_results37',
    project_name='vegetable_classification'
)

# Define early stopping
early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True, verbose=1)

# Perform hyperparameter tuning
tuner.search(train37, epochs=100, validation_data=val37, callbacks=[early_stopping])

# Retrieve the best model
best_model = tuner.get_best_models(num_models=1)[0]
best_model.save('./best_model/best_37x37_model.h5')

# Retrieve the best hyperparameters
best_hyperparameters = tuner.get_best_hyperparameters(num_trials=1)[0]

# Print the best hyperparameters
print("Best Hyperparameters:")
print(f"filters_1: {best_hyperparameters.get('filters_1')}")
print(f"filters_2: {best_hyperparameters.get('filters_2')}")
print(f"filters_3: {best_hyperparameters.get('filters_3')}")
print(f"dense_units1: {best_hyperparameters.get('dense_units1')}")
print(f"dense_units2: {best_hyperparameters.get('dense_units2')}")
print(f"optimizer: {best_hyperparameters.get('optimizer')}")
# Got some help/adapted from: https://keras.io/guides/keras_tuner/getting_started/

# Evaluate the best model
test_accuracy = best_model.evaluate(test37, verbose=1)
print("Test Accuracy:", test_accuracy)

# Print the CNN error
scores = best_model.evaluate(test37, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))
Reloading Tuner from keras_tuner_results37\vegetable_classification\tuner0.json
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restore for details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.iter
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.beta_1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.beta_2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.decay
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.learning_rate
Best Hyperparameters:
filters_1: 64
filters_2: 256
filters_3: 64
dense_units1: 192
dense_units2: 192
optimizer: adam
94/94 [==============================] - 2s 14ms/step - loss: 0.5026 - accuracy: 0.8753
Test Accuracy: [0.5026257634162903, 0.875333309173584]
CNN Error: 12.47%
In [13]:
# Set the seed for reproducibility
seed = 88
np.random.seed(seed)
tf.random.set_seed(seed)




# Define a function to create the Keras model with hyperparameters
def build_model(hp):
    model = Sequential([
    Conv2D(hp.Int('filters_1', 64, 256, step=64), (5, 5), input_shape=(131, 131, 1), activation='relu', kernel_regularizer=l2(0.0001)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(hp.Int('filters_2', 64, 256, step=64), (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(hp.Int('filters_3', 64, 256, step=64), (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(hp.Int('dense_units1', 128, 256, step=64), activation='relu', kernel_regularizer=l2(0.0001)),
    Dropout(0.25),
    Dense(hp.Int('dense_units2', 128, 256, step=64), activation='relu'),
    Dropout(0.25),
    Dense(15, activation='softmax')
    ])
    
    model.compile(optimizer=hp.Choice('optimizer', ['adam', 'rmsprop']),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Initialize the Keras Tuner
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=8,
    executions_per_trial=1,
    directory='keras_tuner_results131',
    project_name='vegetable_classification'
)

# Define early stopping
early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True, verbose=1)

# Perform hyperparameter tuning
tuner.search(train131, epochs=100, validation_data=val131, callbacks=[early_stopping])

# Retrieve the best model
best_model = tuner.get_best_models(num_models=1)[0]
best_model.save('./best_model/best_131x131_model.h5')

# Retrieve the best hyperparameters
best_hyperparameters = tuner.get_best_hyperparameters(num_trials=1)[0]

# Print the best hyperparameters
print("Best Hyperparameters:")
print(f"filters_1: {best_hyperparameters.get('filters_1')}")
print(f"filters_2: {best_hyperparameters.get('filters_2')}")
print(f"filters_3: {best_hyperparameters.get('filters_3')}")
print(f"dense_units1: {best_hyperparameters.get('dense_units1')}")
print(f"dense_units2: {best_hyperparameters.get('dense_units2')}")
print(f"optimizer: {best_hyperparameters.get('optimizer')}")


# Evaluate the best model
test_accuracy = best_model.evaluate(test131, verbose=1)
print("Test Accuracy:", test_accuracy)

# Print the CNN error
scores = best_model.evaluate(test131, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))
Reloading Tuner from keras_tuner_results131\vegetable_classification\tuner0.json
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restore for details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.iter
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.beta_1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.beta_2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.decay
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.learning_rate
Best Hyperparameters:
filters_1: 64
filters_2: 256
filters_3: 256
dense_units1: 256
dense_units2: 128
optimizer: adam
94/94 [==============================] - 3s 23ms/step - loss: 0.4304 - accuracy: 0.9410
Test Accuracy: [0.43037083745002747, 0.9409999847412109]
CNN Error: 5.90%
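Putting the two tuned models side by side: the reported CNN error is simply 100 − accuracy × 100, so the test accuracies printed above translate directly (values taken from the evaluation logs above):

```python
# CNN error = 100 - test accuracy (in %), as computed in the cells above
results = {
    '37x37': 0.8753,    # test accuracy of the tuned 37x37 model
    '131x131': 0.9410,  # test accuracy of the tuned 131x131 model
}

for size, acc in results.items():
    print(f"{size}: CNN Error = {100 - acc * 100:.2f}%")
# 37x37: CNN Error = 12.47%
# 131x131: CNN Error = 5.90%
```

The larger 131x131 inputs roughly halve the error, suggesting the extra spatial detail matters for distinguishing these vegetables.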

Saving Best Models with Hyperparameter Tuning¶


In [25]:
# Set the seed for reproducibility
seed = 88
np.random.seed(seed)
tf.random.set_seed(seed)

# Load the best model
model = load_model('./best_model/best_37x37_model.h5')

# Define early stopping
early_stopping = EarlyStopping(monitor='val_accuracy', patience=20, restore_best_weights=True, verbose=1)
history = model.fit(train37, validation_data=val37, epochs=100, callbacks=[early_stopping], verbose=1)

# Final evaluation of the model
scores = model.evaluate(test37, verbose=1)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))

# Plot accuracy
plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (37x37)')
plt.plot(history.history['val_accuracy'], label='Validation (37x37)')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

# Get the true labels
true_classes = test37.classes

# Predict the output on the test set
predictions = model.predict(test37)
predicted_classes = np.argmax(predictions, axis=1)

# Get the label to class mapping from the generator
class_labels = list(test37.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

# Plot confusion matrix
plot_confusion_matrix(cm, class_labels)

def generate_predictions(test_image_path, actual_label):
    # 1. Load and preprocess the image
    test_img = image.load_img(test_image_path, target_size=(37,37), color_mode='grayscale')
    test_img_arr = image.img_to_array(test_img) / 255.0
    test_img_input = np.expand_dims(test_img_arr, axis=0)  # Add batch dimension

    # 2. Make Predictions

    # Create a dictionary mapping from class index to class name
    class_map = {i: class_name for i, class_name in enumerate(class_labels)}
    predicted_label = np.argmax(model.predict(test_img_input))
    predicted_vegetable = class_map[predicted_label]

    # Display the image and predictions
    plt.figure(figsize=(4, 4))
    plt.imshow(test_img_arr, cmap='gray')
    plt.title(f"Predicted Label: {predicted_vegetable}, Actual Label: {actual_label}")
    plt.grid(False)
    plt.axis('off')
    plt.show()

# Example usage
test_image_path = './Cleaned Dataset for CA1 part A - AY2425S1/test37/Bean/0103.jpg'
actual_label = 'Bean'
generate_predictions(test_image_path, actual_label)
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
Epoch 1/100
283/283 [==============================] - 11s 36ms/step - loss: 0.3102 - accuracy: 0.9136 - val_loss: 0.6833 - val_accuracy: 0.8530
Epoch 2/100
283/283 [==============================] - 6s 21ms/step - loss: 0.2935 - accuracy: 0.9193 - val_loss: 0.6311 - val_accuracy: 0.8507
Epoch 3/100
283/283 [==============================] - 6s 21ms/step - loss: 0.2960 - accuracy: 0.9186 - val_loss: 0.6795 - val_accuracy: 0.8470
Epoch 4/100
283/283 [==============================] - 6s 22ms/step - loss: 0.3183 - accuracy: 0.9129 - val_loss: 0.5888 - val_accuracy: 0.8627
Epoch 5/100
283/283 [==============================] - 6s 21ms/step - loss: 0.3140 - accuracy: 0.9083 - val_loss: 0.6610 - val_accuracy: 0.8467
Epoch 6/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2892 - accuracy: 0.9135 - val_loss: 0.6380 - val_accuracy: 0.8557
Epoch 7/100
283/283 [==============================] - 6s 21ms/step - loss: 0.2840 - accuracy: 0.9201 - val_loss: 0.6817 - val_accuracy: 0.8520
Epoch 8/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2856 - accuracy: 0.9200 - val_loss: 0.5335 - val_accuracy: 0.8787
Epoch 9/100
283/283 [==============================] - 6s 21ms/step - loss: 0.2699 - accuracy: 0.9256 - val_loss: 0.6803 - val_accuracy: 0.8470
Epoch 10/100
283/283 [==============================] - 6s 23ms/step - loss: 0.2847 - accuracy: 0.9179 - val_loss: 0.6436 - val_accuracy: 0.8560
Epoch 11/100
283/283 [==============================] - 6s 21ms/step - loss: 0.3050 - accuracy: 0.9138 - val_loss: 0.6097 - val_accuracy: 0.8613
Epoch 12/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2865 - accuracy: 0.9208 - val_loss: 0.6899 - val_accuracy: 0.8467
Epoch 13/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2813 - accuracy: 0.9204 - val_loss: 0.6209 - val_accuracy: 0.8593
Epoch 14/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2847 - accuracy: 0.9197 - val_loss: 0.7108 - val_accuracy: 0.8407
Epoch 15/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2863 - accuracy: 0.9178 - val_loss: 0.7105 - val_accuracy: 0.8423
Epoch 16/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2872 - accuracy: 0.9208 - val_loss: 0.8253 - val_accuracy: 0.8180
Epoch 17/100
283/283 [==============================] - 7s 24ms/step - loss: 0.2752 - accuracy: 0.9230 - val_loss: 0.6486 - val_accuracy: 0.8587
Epoch 18/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2701 - accuracy: 0.9256 - val_loss: 0.6479 - val_accuracy: 0.8507
Epoch 19/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2718 - accuracy: 0.9242 - val_loss: 0.6092 - val_accuracy: 0.8670
Epoch 20/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2681 - accuracy: 0.9266 - val_loss: 0.7222 - val_accuracy: 0.8347
Epoch 21/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2525 - accuracy: 0.9309 - val_loss: 0.6586 - val_accuracy: 0.8653
Epoch 22/100
283/283 [==============================] - 6s 22ms/step - loss: 0.3074 - accuracy: 0.9163 - val_loss: 0.6679 - val_accuracy: 0.8593
Epoch 23/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2893 - accuracy: 0.9220 - val_loss: 0.6862 - val_accuracy: 0.8437
Epoch 24/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2717 - accuracy: 0.9237 - val_loss: 0.8302 - val_accuracy: 0.8133
Epoch 25/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2686 - accuracy: 0.9210 - val_loss: 0.6652 - val_accuracy: 0.8463
Epoch 26/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2953 - accuracy: 0.9185 - val_loss: 0.8353 - val_accuracy: 0.8233
Epoch 27/100
283/283 [==============================] - 6s 22ms/step - loss: 0.2813 - accuracy: 0.9246 - val_loss: 0.6380 - val_accuracy: 0.8540
Epoch 28/100
283/283 [==============================] - ETA: 0s - loss: 0.2584 - accuracy: 0.9280Restoring model weights from the end of the best epoch: 8.
283/283 [==============================] - 6s 22ms/step - loss: 0.2584 - accuracy: 0.9280 - val_loss: 0.6409 - val_accuracy: 0.8670
Epoch 28: early stopping
94/94 [==============================] - 2s 22ms/step - loss: 0.4988 - accuracy: 0.8773
CNN Error: 12.27%
[Plot: model accuracy (37x37)]
94/94 [==============================] - 1s 12ms/step
[Plot: confusion matrix (37x37)]
1/1 [==============================] - 0s 297ms/step
[Image: sample prediction, actual label 'Bean']
In [28]:
# Load the best model
model = load_model('./best_model/best_131x131_model.h5')

# Define early stopping
early_stopping = EarlyStopping(monitor='val_accuracy', patience=20, restore_best_weights=True, verbose=1)
history = model.fit(train131, validation_data=val131, epochs=100, callbacks=[early_stopping], verbose=1)

# Final evaluation of the model
scores = model.evaluate(test131, verbose=1)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))

# Plot accuracy
plt.figure(figsize=(10, 6))
plt.plot(history.history['accuracy'], label='Train (131x131)')
plt.plot(history.history['val_accuracy'], label='Validation (131x131)')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.show()

# Get the true labels
true_classes = test131.classes

# Predict the output on the test set
predictions = model.predict(test131)
predicted_classes = np.argmax(predictions, axis=1)

# Get the label to class mapping from the generator
class_labels = list(test131.class_indices.keys())

# Compute the confusion matrix
cm = confusion_matrix(true_classes, predicted_classes)

# Plot confusion matrix
plot_confusion_matrix(cm, class_labels)

def generate_predictions(test_image_path, actual_label):
    # 1. Load and preprocess the image
    test_img = image.load_img(test_image_path, target_size=(131,131), color_mode='grayscale')
    test_img_arr = image.img_to_array(test_img) / 255.0
    test_img_input = np.expand_dims(test_img_arr, axis=0)  # Add batch dimension

    # 2. Make Predictions

    # Create a dictionary mapping from class index to class name
    class_map = {i: class_name for i, class_name in enumerate(class_labels)}
    predicted_label = np.argmax(model.predict(test_img_input))
    predicted_vegetable = class_map[predicted_label]

    # Display the image and predictions
    plt.figure(figsize=(4, 4))
    plt.imshow(test_img_arr, cmap='gray')
    plt.title(f"Predicted Label: {predicted_vegetable}, Actual Label: {actual_label}")
    plt.grid(False)
    plt.axis('off')
    plt.show()

# Example usage
test_image_path = './Cleaned Dataset for CA1 part A - AY2425S1/test131/Bean/0103.jpg'
actual_label = 'Bean'
generate_predictions(test_image_path, actual_label)
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
Epoch 1/100
283/283 [==============================] - 29s 101ms/step - loss: 0.3476 - accuracy: 0.9483 - val_loss: 0.4139 - val_accuracy: 0.9323
Epoch 2/100
283/283 [==============================] - 37s 132ms/step - loss: 0.3201 - accuracy: 0.9554 - val_loss: 0.4071 - val_accuracy: 0.9360
Epoch 3/100
283/283 [==============================] - 111s 393ms/step - loss: 0.3365 - accuracy: 0.9502 - val_loss: 0.4171 - val_accuracy: 0.9343
Epoch 4/100
283/283 [==============================] - 111s 393ms/step - loss: 0.3384 - accuracy: 0.9473 - val_loss: 0.4241 - val_accuracy: 0.9327
Epoch 5/100
283/283 [==============================] - 112s 394ms/step - loss: 0.3331 - accuracy: 0.9525 - val_loss: 0.4134 - val_accuracy: 0.9387
Epoch 6/100
283/283 [==============================] - 112s 394ms/step - loss: 0.3419 - accuracy: 0.9524 - val_loss: 0.3991 - val_accuracy: 0.9387
Epoch 7/100
283/283 [==============================] - 112s 394ms/step - loss: 0.3318 - accuracy: 0.9561 - val_loss: 0.3927 - val_accuracy: 0.9413
Epoch 8/100
283/283 [==============================] - 113s 398ms/step - loss: 0.3324 - accuracy: 0.9564 - val_loss: 0.4172 - val_accuracy: 0.9387
Epoch 9/100
283/283 [==============================] - 112s 396ms/step - loss: 0.3293 - accuracy: 0.9598 - val_loss: 0.4481 - val_accuracy: 0.9280
Epoch 10/100
283/283 [==============================] - 112s 396ms/step - loss: 0.3536 - accuracy: 0.9504 - val_loss: 0.4502 - val_accuracy: 0.9297
Epoch 11/100
283/283 [==============================] - 50s 177ms/step - loss: 0.3536 - accuracy: 0.9510 - val_loss: 0.4345 - val_accuracy: 0.9310
Epoch 12/100
283/283 [==============================] - 29s 103ms/step - loss: 0.3227 - accuracy: 0.9619 - val_loss: 0.5926 - val_accuracy: 0.8953
Epoch 13/100
283/283 [==============================] - 29s 104ms/step - loss: 0.3453 - accuracy: 0.9581 - val_loss: 0.4486 - val_accuracy: 0.9330
Epoch 14/100
283/283 [==============================] - 36s 127ms/step - loss: 0.3323 - accuracy: 0.9603 - val_loss: 0.4584 - val_accuracy: 0.9303
Epoch 15/100
283/283 [==============================] - 113s 400ms/step - loss: 0.3587 - accuracy: 0.9523 - val_loss: 0.4648 - val_accuracy: 0.9310
Epoch 16/100
283/283 [==============================] - 103s 365ms/step - loss: 0.3410 - accuracy: 0.9582 - val_loss: 0.4255 - val_accuracy: 0.9413
Epoch 17/100
283/283 [==============================] - 29s 102ms/step - loss: 0.3514 - accuracy: 0.9580 - val_loss: 0.4042 - val_accuracy: 0.9467
Epoch 18/100
283/283 [==============================] - 82s 292ms/step - loss: 0.3524 - accuracy: 0.9550 - val_loss: 0.5149 - val_accuracy: 0.9230
Epoch 19/100
283/283 [==============================] - 114s 402ms/step - loss: 0.3571 - accuracy: 0.9555 - val_loss: 0.4837 - val_accuracy: 0.9217
Epoch 20/100
283/283 [==============================] - 114s 401ms/step - loss: 0.3368 - accuracy: 0.9619 - val_loss: 0.4013 - val_accuracy: 0.9480
Epoch 21/100
283/283 [==============================] - 113s 398ms/step - loss: 0.3526 - accuracy: 0.9560 - val_loss: 0.5180 - val_accuracy: 0.9147
Epoch 22/100
283/283 [==============================] - 118s 415ms/step - loss: 0.3408 - accuracy: 0.9607 - val_loss: 0.4598 - val_accuracy: 0.9320
Epoch 23/100
283/283 [==============================] - 113s 398ms/step - loss: 0.3399 - accuracy: 0.9590 - val_loss: 0.4779 - val_accuracy: 0.9257
Epoch 24/100
283/283 [==============================] - 94s 331ms/step - loss: 0.3431 - accuracy: 0.9619 - val_loss: 0.4428 - val_accuracy: 0.9400
Epoch 25/100
283/283 [==============================] - 29s 102ms/step - loss: 0.3575 - accuracy: 0.9547 - val_loss: 0.4498 - val_accuracy: 0.9363
Epoch 26/100
283/283 [==============================] - 31s 110ms/step - loss: 0.3561 - accuracy: 0.9565 - val_loss: 0.4491 - val_accuracy: 0.9383
Epoch 27/100
283/283 [==============================] - 113s 400ms/step - loss: 0.3391 - accuracy: 0.9637 - val_loss: 0.4065 - val_accuracy: 0.9477
Epoch 28/100
283/283 [==============================] - 113s 398ms/step - loss: 0.3292 - accuracy: 0.9643 - val_loss: 0.3982 - val_accuracy: 0.9490
Epoch 29/100
283/283 [==============================] - 112s 396ms/step - loss: 0.3449 - accuracy: 0.9621 - val_loss: 0.4507 - val_accuracy: 0.9397
Epoch 30/100
283/283 [==============================] - 92s 324ms/step - loss: 0.3546 - accuracy: 0.9590 - val_loss: 0.4435 - val_accuracy: 0.9347
Epoch 31/100
283/283 [==============================] - 29s 102ms/step - loss: 0.3372 - accuracy: 0.9626 - val_loss: 0.4384 - val_accuracy: 0.9373
Epoch 32/100
283/283 [==============================] - 97s 344ms/step - loss: 0.3464 - accuracy: 0.9609 - val_loss: 0.4745 - val_accuracy: 0.9343
Epoch 33/100
283/283 [==============================] - 112s 394ms/step - loss: 0.3606 - accuracy: 0.9575 - val_loss: 0.4316 - val_accuracy: 0.9390
Epoch 34/100
283/283 [==============================] - 112s 394ms/step - loss: 0.3317 - accuracy: 0.9656 - val_loss: 0.4625 - val_accuracy: 0.9333
Epoch 35/100
283/283 [==============================] - 112s 395ms/step - loss: 0.3383 - accuracy: 0.9642 - val_loss: 0.4267 - val_accuracy: 0.9417
Epoch 36/100
283/283 [==============================] - 113s 399ms/step - loss: 0.3454 - accuracy: 0.9603 - val_loss: 0.4204 - val_accuracy: 0.9437
Epoch 37/100
283/283 [==============================] - 112s 395ms/step - loss: 0.3624 - accuracy: 0.9556 - val_loss: 0.4672 - val_accuracy: 0.9307
Epoch 38/100
283/283 [==============================] - 113s 399ms/step - loss: 0.3424 - accuracy: 0.9616 - val_loss: 0.4520 - val_accuracy: 0.9300
Epoch 39/100
283/283 [==============================] - 112s 396ms/step - loss: 0.3386 - accuracy: 0.9638 - val_loss: 0.4441 - val_accuracy: 0.9383
Epoch 40/100
283/283 [==============================] - 112s 397ms/step - loss: 0.3300 - accuracy: 0.9656 - val_loss: 0.4468 - val_accuracy: 0.9367
Epoch 41/100
283/283 [==============================] - 113s 400ms/step - loss: 0.3298 - accuracy: 0.9640 - val_loss: 0.4408 - val_accuracy: 0.9403
Epoch 42/100
283/283 [==============================] - 112s 397ms/step - loss: 0.3389 - accuracy: 0.9622 - val_loss: 0.4285 - val_accuracy: 0.9363
Epoch 43/100
283/283 [==============================] - 113s 399ms/step - loss: 0.3361 - accuracy: 0.9649 - val_loss: 0.5492 - val_accuracy: 0.9150
Epoch 44/100
283/283 [==============================] - 113s 400ms/step - loss: 0.3523 - accuracy: 0.9586 - val_loss: 0.4970 - val_accuracy: 0.9310
Epoch 45/100
283/283 [==============================] - 113s 398ms/step - loss: 0.3293 - accuracy: 0.9681 - val_loss: 0.3747 - val_accuracy: 0.9543
Epoch 46/100
283/283 [==============================] - 114s 402ms/step - loss: 0.3189 - accuracy: 0.9688 - val_loss: 0.3986 - val_accuracy: 0.9513
Epoch 47/100
283/283 [==============================] - 114s 401ms/step - loss: 0.3534 - accuracy: 0.9596 - val_loss: 0.4131 - val_accuracy: 0.9463
Epoch 48/100
283/283 [==============================] - 113s 400ms/step - loss: 0.3216 - accuracy: 0.9672 - val_loss: 0.4319 - val_accuracy: 0.9377
Epoch 49/100
283/283 [==============================] - 113s 399ms/step - loss: 0.3210 - accuracy: 0.9656 - val_loss: 0.4010 - val_accuracy: 0.9473
Epoch 50/100
283/283 [==============================] - 73s 259ms/step - loss: 0.3312 - accuracy: 0.9644 - val_loss: 0.4196 - val_accuracy: 0.9407
Epoch 51/100
283/283 [==============================] - 29s 103ms/step - loss: 0.3289 - accuracy: 0.9659 - val_loss: 0.4081 - val_accuracy: 0.9460
Epoch 52/100
283/283 [==============================] - 39s 137ms/step - loss: 0.3478 - accuracy: 0.9612 - val_loss: 0.4117 - val_accuracy: 0.9453
Epoch 53/100
283/283 [==============================] - 112s 397ms/step - loss: 0.3261 - accuracy: 0.9692 - val_loss: 0.4321 - val_accuracy: 0.9477
Epoch 54/100
283/283 [==============================] - 111s 392ms/step - loss: 0.3272 - accuracy: 0.9663 - val_loss: 0.4092 - val_accuracy: 0.9473
Epoch 55/100
283/283 [==============================] - 91s 320ms/step - loss: 0.3306 - accuracy: 0.9649 - val_loss: 0.4748 - val_accuracy: 0.9313
Epoch 56/100
283/283 [==============================] - 29s 104ms/step - loss: 0.3173 - accuracy: 0.9682 - val_loss: 0.4439 - val_accuracy: 0.9367
Epoch 57/100
283/283 [==============================] - 96s 341ms/step - loss: 0.3235 - accuracy: 0.9667 - val_loss: 0.4434 - val_accuracy: 0.9400
Epoch 58/100
283/283 [==============================] - 111s 393ms/step - loss: 0.3411 - accuracy: 0.9613 - val_loss: 0.5241 - val_accuracy: 0.9197
Epoch 59/100
283/283 [==============================] - 111s 393ms/step - loss: 0.3350 - accuracy: 0.9629 - val_loss: 0.4118 - val_accuracy: 0.9460
Epoch 60/100
283/283 [==============================] - 112s 395ms/step - loss: 0.3244 - accuracy: 0.9668 - val_loss: 0.4018 - val_accuracy: 0.9463
Epoch 61/100
283/283 [==============================] - 113s 398ms/step - loss: 0.3398 - accuracy: 0.9648 - val_loss: 0.4001 - val_accuracy: 0.9517
Epoch 62/100
283/283 [==============================] - 113s 400ms/step - loss: 0.3417 - accuracy: 0.9646 - val_loss: 0.4322 - val_accuracy: 0.9440
Epoch 63/100
283/283 [==============================] - 115s 405ms/step - loss: 0.3390 - accuracy: 0.9637 - val_loss: 0.4901 - val_accuracy: 0.9363
Epoch 64/100
283/283 [==============================] - 72s 253ms/step - loss: 0.3137 - accuracy: 0.9684 - val_loss: 0.4684 - val_accuracy: 0.9337
Epoch 65/100
283/283 [==============================] - ETA: 0s - loss: 0.3297 - accuracy: 0.9651
Restoring model weights from the end of the best epoch: 45.
283/283 [==============================] - 29s 103ms/step - loss: 0.3297 - accuracy: 0.9651 - val_loss: 0.4773 - val_accuracy: 0.9333
Epoch 65: early stopping
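The log above shows the best validation loss at epoch 45 (val_loss 0.3747) and training halting at epoch 65, which is consistent with an early-stopping callback monitoring `val_loss` with a patience of 20 and `restore_best_weights=True`. A minimal pure-Python sketch of that stopping logic (a simplified stand-in for Keras' `EarlyStopping`, not the exact implementation):

```python
class EarlyStopper:
    """Minimal sketch of EarlyStopping behavior: monitor val_loss,
    stop after `patience` epochs with no improvement, and remember
    the best epoch so its weights can be restored."""

    def __init__(self, patience):
        self.patience = patience
        self.best_loss = float("inf")
        self.best_epoch = None
        self.wait = 0  # epochs since last improvement

    def update(self, epoch, val_loss):
        """Record one epoch's val_loss; return True when training should stop."""
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.best_epoch = epoch
            self.wait = 0
        else:
            self.wait += 1
        return self.wait >= self.patience


# Toy usage: best at epoch 1, then no improvement for 2 epochs -> stop.
stopper = EarlyStopper(patience=2)
for epoch, loss in [(1, 0.50), (2, 0.60), (3, 0.70)]:
    if stopper.update(epoch, loss):
        print(f"Stopped at epoch {epoch}; restoring epoch {stopper.best_epoch}")
```

With patience 20, a best epoch at 45 yields a stop at epoch 65, matching the log.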
94/94 [==============================] - 13s 140ms/step - loss: 0.3914 - accuracy: 0.9460
CNN Error: 5.40%
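The reported "CNN Error" appears to be simply 100 × (1 − test accuracy): with the evaluated accuracy of 0.9460 above, this gives 5.40%. A one-line sketch of that calculation:

```python
def cnn_error_pct(accuracy):
    """Classification error as a percentage of test samples."""
    return round(100 * (1 - accuracy), 2)

print(f"CNN Error: {cnn_error_pct(0.9460):.2f}%")  # matches the log: 5.40%
```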
94/94 [==============================] - 2s 22ms/step
1/1 [==============================] - 0s 85ms/step